Nov 1 00:42:18.115278 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Oct 31 23:02:53 -00 2025
Nov 1 00:42:18.115324 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2
Nov 1 00:42:18.115342 kernel: BIOS-provided physical RAM map:
Nov 1 00:42:18.115355 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Nov 1 00:42:18.115368 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Nov 1 00:42:18.115381 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Nov 1 00:42:18.115400 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Nov 1 00:42:18.115413 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Nov 1 00:42:18.115427 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd26afff] usable
Nov 1 00:42:18.115440 kernel: BIOS-e820: [mem 0x00000000bd26b000-0x00000000bd275fff] ACPI data
Nov 1 00:42:18.115454 kernel: BIOS-e820: [mem 0x00000000bd276000-0x00000000bf8ecfff] usable
Nov 1 00:42:18.115467 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Nov 1 00:42:18.115480 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Nov 1 00:42:18.115494 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Nov 1 00:42:18.115514 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Nov 1 00:42:18.115529 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Nov 1 00:42:18.115543 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Nov 1 00:42:18.115557 kernel: NX (Execute Disable) protection: active
Nov 1 00:42:18.115572 kernel: efi: EFI v2.70 by EDK II
Nov 1 00:42:18.115587 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd327018 RNG=0xbfb73018 TPMEventLog=0xbd26b018
Nov 1 00:42:18.115602 kernel: random: crng init done
Nov 1 00:42:18.115617 kernel: SMBIOS 2.4 present.
Nov 1 00:42:18.115635 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Nov 1 00:42:18.115649 kernel: Hypervisor detected: KVM
Nov 1 00:42:18.115673 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 1 00:42:18.115687 kernel: kvm-clock: cpu 0, msr 4a1a0001, primary cpu clock
Nov 1 00:42:18.115702 kernel: kvm-clock: using sched offset of 14137641861 cycles
Nov 1 00:42:18.115717 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 1 00:42:18.115732 kernel: tsc: Detected 2299.998 MHz processor
Nov 1 00:42:18.115747 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 00:42:18.115762 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 00:42:18.115777 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Nov 1 00:42:18.115796 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 00:42:18.115810 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Nov 1 00:42:18.115825 kernel: Using GB pages for direct mapping
Nov 1 00:42:18.115840 kernel: Secure boot disabled
Nov 1 00:42:18.115855 kernel: ACPI: Early table checksum verification disabled
Nov 1 00:42:18.115869 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Nov 1 00:42:18.115884 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Nov 1 00:42:18.115900 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Nov 1 00:42:18.115926 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Nov 1 00:42:18.115942 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Nov 1 00:42:18.115958 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Nov 1 00:42:18.115973 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Nov 1 00:42:18.115990 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Nov 1 00:42:18.116005 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Nov 1 00:42:18.116025 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Nov 1 00:42:18.116041 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Nov 1 00:42:18.116057 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Nov 1 00:42:18.116072 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Nov 1 00:42:18.116088 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Nov 1 00:42:18.116104 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Nov 1 00:42:18.116120 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Nov 1 00:42:18.116135 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Nov 1 00:42:18.116152 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Nov 1 00:42:18.116207 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Nov 1 00:42:18.116224 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Nov 1 00:42:18.116241 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 1 00:42:18.116256 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 1 00:42:18.116272 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 1 00:42:18.116288 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Nov 1 00:42:18.116304 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Nov 1 00:42:18.116320 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Nov 1 00:42:18.116336 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Nov 1 00:42:18.116356 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff]
Nov 1 00:42:18.116372 kernel: Zone ranges:
Nov 1 00:42:18.116388 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 00:42:18.116404 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 1 00:42:18.116420 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Nov 1 00:42:18.116435 kernel: Movable zone start for each node
Nov 1 00:42:18.116451 kernel: Early memory node ranges
Nov 1 00:42:18.116467 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Nov 1 00:42:18.116483 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Nov 1 00:42:18.116502 kernel: node 0: [mem 0x0000000000100000-0x00000000bd26afff]
Nov 1 00:42:18.116518 kernel: node 0: [mem 0x00000000bd276000-0x00000000bf8ecfff]
Nov 1 00:42:18.116534 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Nov 1 00:42:18.116550 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Nov 1 00:42:18.116565 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Nov 1 00:42:18.116581 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 00:42:18.116597 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Nov 1 00:42:18.116614 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Nov 1 00:42:18.116630 kernel: On node 0, zone DMA32: 11 pages in unavailable ranges
Nov 1 00:42:18.116649 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Nov 1 00:42:18.116672 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Nov 1 00:42:18.116688 kernel: ACPI: PM-Timer IO Port: 0xb008
Nov 1 00:42:18.116704 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 1 00:42:18.116720 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 1 00:42:18.116736 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 1 00:42:18.116752 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 00:42:18.116768 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 1 00:42:18.116784 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 1 00:42:18.116802 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 00:42:18.116817 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 1 00:42:18.116832 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Nov 1 00:42:18.116848 kernel: Booting paravirtualized kernel on KVM
Nov 1 00:42:18.116863 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 00:42:18.116879 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Nov 1 00:42:18.116894 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Nov 1 00:42:18.116909 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Nov 1 00:42:18.116924 kernel: pcpu-alloc: [0] 0 1
Nov 1 00:42:18.116942 kernel: kvm-guest: PV spinlocks enabled
Nov 1 00:42:18.116958 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 1 00:42:18.116973 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932269
Nov 1 00:42:18.116989 kernel: Policy zone: Normal
Nov 1 00:42:18.117006 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2
Nov 1 00:42:18.117022 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 1 00:42:18.117037 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 1 00:42:18.117052 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 1 00:42:18.117071 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 00:42:18.117087 kernel: Memory: 7515400K/7860540K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47496K init, 4084K bss, 344880K reserved, 0K cma-reserved)
Nov 1 00:42:18.117103 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 1 00:42:18.117118 kernel: Kernel/User page tables isolation: enabled
Nov 1 00:42:18.117133 kernel: ftrace: allocating 34614 entries in 136 pages
Nov 1 00:42:18.117149 kernel: ftrace: allocated 136 pages with 2 groups
Nov 1 00:42:18.117164 kernel: rcu: Hierarchical RCU implementation.
Nov 1 00:42:18.117192 kernel: rcu: RCU event tracing is enabled.
Nov 1 00:42:18.117209 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 1 00:42:18.117228 kernel: Rude variant of Tasks RCU enabled.
Nov 1 00:42:18.117256 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 00:42:18.117273 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 00:42:18.117292 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 1 00:42:18.117309 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 1 00:42:18.117324 kernel: Console: colour dummy device 80x25
Nov 1 00:42:18.117341 kernel: printk: console [ttyS0] enabled
Nov 1 00:42:18.117357 kernel: ACPI: Core revision 20210730
Nov 1 00:42:18.117373 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 00:42:18.117390 kernel: x2apic enabled
Nov 1 00:42:18.117409 kernel: Switched APIC routing to physical x2apic.
Nov 1 00:42:18.117425 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Nov 1 00:42:18.117442 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Nov 1 00:42:18.117458 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Nov 1 00:42:18.117474 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Nov 1 00:42:18.117491 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Nov 1 00:42:18.117507 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 00:42:18.117526 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Nov 1 00:42:18.117543 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Nov 1 00:42:18.117559 kernel: Spectre V2 : Mitigation: IBRS
Nov 1 00:42:18.117575 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 1 00:42:18.117591 kernel: RETBleed: Mitigation: IBRS
Nov 1 00:42:18.117607 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 1 00:42:18.117623 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl
Nov 1 00:42:18.117640 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Nov 1 00:42:18.117664 kernel: MDS: Mitigation: Clear CPU buffers
Nov 1 00:42:18.117684 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 1 00:42:18.117701 kernel: active return thunk: its_return_thunk
Nov 1 00:42:18.117717 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 1 00:42:18.117734 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 00:42:18.117750 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 00:42:18.117766 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 00:42:18.117783 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 00:42:18.117799 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 1 00:42:18.117815 kernel: Freeing SMP alternatives memory: 32K
Nov 1 00:42:18.117834 kernel: pid_max: default: 32768 minimum: 301
Nov 1 00:42:18.117849 kernel: LSM: Security Framework initializing
Nov 1 00:42:18.117865 kernel: SELinux: Initializing.
Nov 1 00:42:18.117881 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 1 00:42:18.117897 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 1 00:42:18.117913 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Nov 1 00:42:18.117930 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Nov 1 00:42:18.117947 kernel: signal: max sigframe size: 1776
Nov 1 00:42:18.117964 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 00:42:18.117985 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 1 00:42:18.118002 kernel: smp: Bringing up secondary CPUs ...
Nov 1 00:42:18.118020 kernel: x86: Booting SMP configuration:
Nov 1 00:42:18.118037 kernel: .... node #0, CPUs: #1
Nov 1 00:42:18.118055 kernel: kvm-clock: cpu 1, msr 4a1a0041, secondary cpu clock
Nov 1 00:42:18.118074 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Nov 1 00:42:18.118092 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Nov 1 00:42:18.118110 kernel: smp: Brought up 1 node, 2 CPUs
Nov 1 00:42:18.118131 kernel: smpboot: Max logical packages: 1
Nov 1 00:42:18.118148 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Nov 1 00:42:18.118165 kernel: devtmpfs: initialized
Nov 1 00:42:18.118195 kernel: x86/mm: Memory block size: 128MB
Nov 1 00:42:18.118211 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Nov 1 00:42:18.118255 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 00:42:18.118312 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 1 00:42:18.118352 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 00:42:18.118369 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 00:42:18.118391 kernel: audit: initializing netlink subsys (disabled)
Nov 1 00:42:18.118409 kernel: audit: type=2000 audit(1761957736.800:1): state=initialized audit_enabled=0 res=1
Nov 1 00:42:18.118426 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 00:42:18.118443 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 1 00:42:18.118461 kernel: cpuidle: using governor menu
Nov 1 00:42:18.118479 kernel: ACPI: bus type PCI registered
Nov 1 00:42:18.118496 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 00:42:18.118514 kernel: dca service started, version 1.12.1
Nov 1 00:42:18.118531 kernel: PCI: Using configuration type 1 for base access
Nov 1 00:42:18.118552 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 1 00:42:18.118570 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Nov 1 00:42:18.118587 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 00:42:18.118605 kernel: ACPI: Added _OSI(Module Device)
Nov 1 00:42:18.118622 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 00:42:18.118641 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 00:42:18.118665 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Nov 1 00:42:18.118683 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Nov 1 00:42:18.118700 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Nov 1 00:42:18.118721 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Nov 1 00:42:18.118739 kernel: ACPI: Interpreter enabled
Nov 1 00:42:18.118756 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 1 00:42:18.118774 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 1 00:42:18.118792 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 1 00:42:18.118810 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Nov 1 00:42:18.118827 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 1 00:42:18.119065 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 1 00:42:18.124951 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Nov 1 00:42:18.124989 kernel: PCI host bridge to bus 0000:00
Nov 1 00:42:18.125166 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 1 00:42:18.125378 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 1 00:42:18.125533 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 1 00:42:18.125695 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Nov 1 00:42:18.125848 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 1 00:42:18.126043 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Nov 1 00:42:18.126252 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Nov 1 00:42:18.126436 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Nov 1 00:42:18.126611 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Nov 1 00:42:18.126803 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Nov 1 00:42:18.126983 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Nov 1 00:42:18.127161 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Nov 1 00:42:18.127405 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 1 00:42:18.127582 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Nov 1 00:42:18.127788 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Nov 1 00:42:18.127968 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Nov 1 00:42:18.128144 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Nov 1 00:42:18.128725 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Nov 1 00:42:18.128890 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 1 00:42:18.128909 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 1 00:42:18.128927 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 1 00:42:18.128945 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 1 00:42:18.129092 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 1 00:42:18.129110 kernel: iommu: Default domain type: Translated
Nov 1 00:42:18.129128 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 1 00:42:18.129145 kernel: vgaarb: loaded
Nov 1 00:42:18.129164 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 1 00:42:18.135347 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Nov 1 00:42:18.135367 kernel: PTP clock support registered
Nov 1 00:42:18.135383 kernel: Registered efivars operations
Nov 1 00:42:18.135400 kernel: PCI: Using ACPI for IRQ routing
Nov 1 00:42:18.135417 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 1 00:42:18.135434 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Nov 1 00:42:18.135452 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Nov 1 00:42:18.135470 kernel: e820: reserve RAM buffer [mem 0xbd26b000-0xbfffffff]
Nov 1 00:42:18.135494 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Nov 1 00:42:18.135511 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Nov 1 00:42:18.135528 kernel: clocksource: Switched to clocksource kvm-clock
Nov 1 00:42:18.135547 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 00:42:18.135565 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 00:42:18.135583 kernel: pnp: PnP ACPI init
Nov 1 00:42:18.135601 kernel: pnp: PnP ACPI: found 7 devices
Nov 1 00:42:18.135619 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 1 00:42:18.135637 kernel: NET: Registered PF_INET protocol family
Nov 1 00:42:18.135668 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 1 00:42:18.135686 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 1 00:42:18.135704 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 00:42:18.135722 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 1 00:42:18.135740 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 1 00:42:18.135758 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 1 00:42:18.135776 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 1 00:42:18.135794 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 1 00:42:18.135812 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 00:42:18.135834 kernel: NET: Registered PF_XDP protocol family
Nov 1 00:42:18.136021 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 1 00:42:18.136248 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 1 00:42:18.136409 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 1 00:42:18.136562 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Nov 1 00:42:18.136747 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 1 00:42:18.136773 kernel: PCI: CLS 0 bytes, default 64
Nov 1 00:42:18.136798 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 1 00:42:18.136816 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Nov 1 00:42:18.136834 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 1 00:42:18.136852 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Nov 1 00:42:18.136870 kernel: clocksource: Switched to clocksource tsc
Nov 1 00:42:18.136888 kernel: Initialise system trusted keyrings
Nov 1 00:42:18.136905 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Nov 1 00:42:18.136924 kernel: Key type asymmetric registered
Nov 1 00:42:18.136941 kernel: Asymmetric key parser 'x509' registered
Nov 1 00:42:18.136963 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Nov 1 00:42:18.136981 kernel: io scheduler mq-deadline registered
Nov 1 00:42:18.136999 kernel: io scheduler kyber registered
Nov 1 00:42:18.137016 kernel: io scheduler bfq registered
Nov 1 00:42:18.137035 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 1 00:42:18.137053 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 1 00:42:18.137253 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Nov 1 00:42:18.137277 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Nov 1 00:42:18.137450 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Nov 1 00:42:18.137479 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 1 00:42:18.137645 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Nov 1 00:42:18.137676 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 1 00:42:18.137694 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 1 00:42:18.137712 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Nov 1 00:42:18.137730 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Nov 1 00:42:18.137748 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Nov 1 00:42:18.137927 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Nov 1 00:42:18.137957 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 1 00:42:18.137976 kernel: i8042: Warning: Keylock active
Nov 1 00:42:18.137994 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 1 00:42:18.138012 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 1 00:42:18.138241 kernel: rtc_cmos 00:00: RTC can wake from S4
Nov 1 00:42:18.138405 kernel: rtc_cmos 00:00: registered as rtc0
Nov 1 00:42:18.138561 kernel: rtc_cmos 00:00: setting system clock to 2025-11-01T00:42:17 UTC (1761957737)
Nov 1 00:42:18.138726 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Nov 1 00:42:18.138754 kernel: intel_pstate: CPU model not supported
Nov 1 00:42:18.138773 kernel: pstore: Registered efi as persistent store backend
Nov 1 00:42:18.138791 kernel: NET: Registered PF_INET6 protocol family
Nov 1 00:42:18.138808 kernel: Segment Routing with IPv6
Nov 1 00:42:18.138826 kernel: In-situ OAM (IOAM) with IPv6
Nov 1 00:42:18.138843 kernel: NET: Registered PF_PACKET protocol family
Nov 1 00:42:18.138861 kernel: Key type dns_resolver registered
Nov 1 00:42:18.138878 kernel: IPI shorthand broadcast: enabled
Nov 1 00:42:18.138896 kernel: sched_clock: Marking stable (785345763, 146259087)->(970314690, -38709840)
Nov 1 00:42:18.138917 kernel: registered taskstats version 1
Nov 1 00:42:18.138935 kernel: Loading compiled-in X.509 certificates
Nov 1 00:42:18.138953 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 1 00:42:18.138971 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: f2055682e6899ad8548fd369019e7b47939b46a0'
Nov 1 00:42:18.138988 kernel: Key type .fscrypt registered
Nov 1 00:42:18.139005 kernel: Key type fscrypt-provisioning registered
Nov 1 00:42:18.139023 kernel: pstore: Using crash dump compression: deflate
Nov 1 00:42:18.139041 kernel: ima: Allocated hash algorithm: sha1
Nov 1 00:42:18.139062 kernel: ima: No architecture policies found
Nov 1 00:42:18.139080 kernel: clk: Disabling unused clocks
Nov 1 00:42:18.139098 kernel: Freeing unused kernel image (initmem) memory: 47496K
Nov 1 00:42:18.139116 kernel: Write protecting the kernel read-only data: 28672k
Nov 1 00:42:18.139133 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Nov 1 00:42:18.139151 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Nov 1 00:42:18.139168 kernel: Run /init as init process
Nov 1 00:42:18.139209 kernel: with arguments:
Nov 1 00:42:18.139225 kernel: /init
Nov 1 00:42:18.139242 kernel: with environment:
Nov 1 00:42:18.139263 kernel: HOME=/
Nov 1 00:42:18.139280 kernel: TERM=linux
Nov 1 00:42:18.139298 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 1 00:42:18.139319 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 1 00:42:18.139341 systemd[1]: Detected virtualization kvm.
Nov 1 00:42:18.139360 systemd[1]: Detected architecture x86-64.
Nov 1 00:42:18.139378 systemd[1]: Running in initrd.
Nov 1 00:42:18.139399 systemd[1]: No hostname configured, using default hostname.
Nov 1 00:42:18.139418 systemd[1]: Hostname set to .
Nov 1 00:42:18.139437 systemd[1]: Initializing machine ID from VM UUID.
Nov 1 00:42:18.139456 systemd[1]: Queued start job for default target initrd.target.
Nov 1 00:42:18.139474 systemd[1]: Started systemd-ask-password-console.path.
Nov 1 00:42:18.139492 systemd[1]: Reached target cryptsetup.target.
Nov 1 00:42:18.139510 systemd[1]: Reached target paths.target.
Nov 1 00:42:18.139528 systemd[1]: Reached target slices.target.
Nov 1 00:42:18.139550 systemd[1]: Reached target swap.target.
Nov 1 00:42:18.139568 systemd[1]: Reached target timers.target.
Nov 1 00:42:18.139588 systemd[1]: Listening on iscsid.socket.
Nov 1 00:42:18.139606 systemd[1]: Listening on iscsiuio.socket.
Nov 1 00:42:18.139625 systemd[1]: Listening on systemd-journald-audit.socket.
Nov 1 00:42:18.139644 systemd[1]: Listening on systemd-journald-dev-log.socket.
Nov 1 00:42:18.139669 systemd[1]: Listening on systemd-journald.socket.
Nov 1 00:42:18.139689 systemd[1]: Listening on systemd-networkd.socket.
Nov 1 00:42:18.139711 systemd[1]: Listening on systemd-udevd-control.socket.
Nov 1 00:42:18.139731 systemd[1]: Listening on systemd-udevd-kernel.socket.
Nov 1 00:42:18.139768 systemd[1]: Reached target sockets.target.
Nov 1 00:42:18.139790 systemd[1]: Starting kmod-static-nodes.service...
Nov 1 00:42:18.139809 systemd[1]: Finished network-cleanup.service.
Nov 1 00:42:18.139828 systemd[1]: Starting systemd-fsck-usr.service...
Nov 1 00:42:18.139847 systemd[1]: Starting systemd-journald.service...
Nov 1 00:42:18.139869 systemd[1]: Starting systemd-modules-load.service...
Nov 1 00:42:18.139889 systemd[1]: Starting systemd-resolved.service...
Nov 1 00:42:18.139908 systemd[1]: Starting systemd-vconsole-setup.service...
Nov 1 00:42:18.139927 systemd[1]: Finished kmod-static-nodes.service.
Nov 1 00:42:18.139947 kernel: audit: type=1130 audit(1761957738.118:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:18.139965 systemd[1]: Finished systemd-fsck-usr.service.
Nov 1 00:42:18.139985 kernel: audit: type=1130 audit(1761957738.128:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:18.140004 systemd[1]: Finished systemd-vconsole-setup.service.
Nov 1 00:42:18.140032 systemd-journald[190]: Journal started
Nov 1 00:42:18.140117 systemd-journald[190]: Runtime Journal (/run/log/journal/81268a5c19b8f7b036a03c41ad19be86) is 8.0M, max 148.8M, 140.8M free.
Nov 1 00:42:18.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:18.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:18.138234 systemd-modules-load[191]: Inserted module 'overlay'
Nov 1 00:42:18.155928 kernel: audit: type=1130 audit(1761957738.150:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:18.155975 systemd[1]: Started systemd-journald.service.
Nov 1 00:42:18.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:18.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:18.170191 kernel: audit: type=1130 audit(1761957738.164:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:18.173448 systemd[1]: Starting dracut-cmdline-ask.service...
Nov 1 00:42:18.176204 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Nov 1 00:42:18.190679 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Nov 1 00:42:18.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:18.195197 kernel: audit: type=1130 audit(1761957738.189:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:18.198290 systemd-resolved[192]: Positive Trust Anchors:
Nov 1 00:42:18.200240 systemd-resolved[192]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 00:42:18.200458 systemd-resolved[192]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Nov 1 00:42:18.208058 systemd-resolved[192]: Defaulting to hostname 'linux'.
Nov 1 00:42:18.209910 systemd[1]: Started systemd-resolved.service.
Nov 1 00:42:18.210088 systemd[1]: Reached target nss-lookup.target.
Nov 1 00:42:18.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:18.214464 kernel: audit: type=1130 audit(1761957738.208:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:18.219693 systemd[1]: Finished dracut-cmdline-ask.service.
Nov 1 00:42:18.239344 kernel: audit: type=1130 audit(1761957738.222:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:18.239381 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 1 00:42:18.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Nov 1 00:42:18.224663 systemd[1]: Starting dracut-cmdline.service... Nov 1 00:42:18.241287 kernel: Bridge firewalling registered Nov 1 00:42:18.241611 systemd-modules-load[191]: Inserted module 'br_netfilter' Nov 1 00:42:18.251062 dracut-cmdline[207]: dracut-dracut-053 Nov 1 00:42:18.261361 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2 Nov 1 00:42:18.333364 kernel: SCSI subsystem initialized Nov 1 00:42:18.333408 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 1 00:42:18.333434 kernel: device-mapper: uevent: version 1.0.3 Nov 1 00:42:18.333467 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Nov 1 00:42:18.331679 systemd-modules-load[191]: Inserted module 'dm_multipath' Nov 1 00:42:18.333014 systemd[1]: Finished systemd-modules-load.service. Nov 1 00:42:18.382382 kernel: audit: type=1130 audit(1761957738.342:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:18.382452 kernel: Loading iSCSI transport class v2.0-870. Nov 1 00:42:18.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:18.345018 systemd[1]: Starting systemd-sysctl.service... 
Nov 1 00:42:18.431384 kernel: iscsi: registered transport (tcp) Nov 1 00:42:18.431444 kernel: audit: type=1130 audit(1761957738.402:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:18.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:18.390802 systemd[1]: Finished systemd-sysctl.service. Nov 1 00:42:18.460326 kernel: iscsi: registered transport (qla4xxx) Nov 1 00:42:18.460443 kernel: QLogic iSCSI HBA Driver Nov 1 00:42:18.509128 systemd[1]: Finished dracut-cmdline.service. Nov 1 00:42:18.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:18.510671 systemd[1]: Starting dracut-pre-udev.service... Nov 1 00:42:18.575279 kernel: raid6: avx2x4 gen() 22114 MB/s Nov 1 00:42:18.596220 kernel: raid6: avx2x4 xor() 6165 MB/s Nov 1 00:42:18.617214 kernel: raid6: avx2x2 gen() 22804 MB/s Nov 1 00:42:18.638211 kernel: raid6: avx2x2 xor() 18591 MB/s Nov 1 00:42:18.659215 kernel: raid6: avx2x1 gen() 20592 MB/s Nov 1 00:42:18.680203 kernel: raid6: avx2x1 xor() 16051 MB/s Nov 1 00:42:18.701208 kernel: raid6: sse2x4 gen() 10141 MB/s Nov 1 00:42:18.722210 kernel: raid6: sse2x4 xor() 6275 MB/s Nov 1 00:42:18.743206 kernel: raid6: sse2x2 gen() 10754 MB/s Nov 1 00:42:18.764214 kernel: raid6: sse2x2 xor() 7406 MB/s Nov 1 00:42:18.785209 kernel: raid6: sse2x1 gen() 9561 MB/s Nov 1 00:42:18.811218 kernel: raid6: sse2x1 xor() 5162 MB/s Nov 1 00:42:18.811290 kernel: raid6: using algorithm avx2x2 gen() 22804 MB/s Nov 1 00:42:18.811313 kernel: raid6: .... 
xor() 18591 MB/s, rmw enabled Nov 1 00:42:18.816380 kernel: raid6: using avx2x2 recovery algorithm Nov 1 00:42:18.843224 kernel: xor: automatically using best checksumming function avx Nov 1 00:42:18.961229 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Nov 1 00:42:18.974237 systemd[1]: Finished dracut-pre-udev.service. Nov 1 00:42:18.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:18.982000 audit: BPF prog-id=7 op=LOAD Nov 1 00:42:18.982000 audit: BPF prog-id=8 op=LOAD Nov 1 00:42:18.984224 systemd[1]: Starting systemd-udevd.service... Nov 1 00:42:19.003314 systemd-udevd[390]: Using default interface naming scheme 'v252'. Nov 1 00:42:19.010981 systemd[1]: Started systemd-udevd.service. Nov 1 00:42:19.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:19.030811 systemd[1]: Starting dracut-pre-trigger.service... Nov 1 00:42:19.048486 dracut-pre-trigger[395]: rd.md=0: removing MD RAID activation Nov 1 00:42:19.092581 systemd[1]: Finished dracut-pre-trigger.service. Nov 1 00:42:19.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:19.094340 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 00:42:19.176155 systemd[1]: Finished systemd-udev-trigger.service. Nov 1 00:42:19.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:42:19.312201 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 00:42:19.367801 kernel: AVX2 version of gcm_enc/dec engaged. Nov 1 00:42:19.367899 kernel: AES CTR mode by8 optimization enabled Nov 1 00:42:19.373265 kernel: scsi host0: Virtio SCSI HBA Nov 1 00:42:19.394196 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Nov 1 00:42:19.457036 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB) Nov 1 00:42:19.515109 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Nov 1 00:42:19.515348 kernel: sd 0:0:1:0: [sda] Write Protect is off Nov 1 00:42:19.515494 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Nov 1 00:42:19.515643 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 1 00:42:19.515781 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 1 00:42:19.515798 kernel: GPT:17805311 != 33554431 Nov 1 00:42:19.515811 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 1 00:42:19.515831 kernel: GPT:17805311 != 33554431 Nov 1 00:42:19.515844 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 1 00:42:19.515858 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:42:19.515872 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Nov 1 00:42:19.571412 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Nov 1 00:42:19.579214 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (434) Nov 1 00:42:19.595333 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Nov 1 00:42:19.600681 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Nov 1 00:42:19.625827 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Nov 1 00:42:19.650392 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Nov 1 00:42:19.661489 systemd[1]: Starting disk-uuid.service... 
Nov 1 00:42:19.688350 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:42:19.688646 disk-uuid[502]: Primary Header is updated. Nov 1 00:42:19.688646 disk-uuid[502]: Secondary Entries is updated. Nov 1 00:42:19.688646 disk-uuid[502]: Secondary Header is updated. Nov 1 00:42:19.715314 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:42:19.722198 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:42:20.733203 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:42:20.733363 disk-uuid[503]: The operation has completed successfully. Nov 1 00:42:20.803999 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 00:42:20.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:20.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:20.804134 systemd[1]: Finished disk-uuid.service. Nov 1 00:42:20.822360 systemd[1]: Starting verity-setup.service... Nov 1 00:42:20.851221 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 1 00:42:20.935716 systemd[1]: Found device dev-mapper-usr.device. Nov 1 00:42:20.938232 systemd[1]: Mounting sysusr-usr.mount... Nov 1 00:42:20.956657 systemd[1]: Finished verity-setup.service. Nov 1 00:42:20.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:21.046205 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Nov 1 00:42:21.046929 systemd[1]: Mounted sysusr-usr.mount. Nov 1 00:42:21.047360 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. 
Nov 1 00:42:21.098351 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:42:21.098395 kernel: BTRFS info (device sda6): using free space tree Nov 1 00:42:21.098419 kernel: BTRFS info (device sda6): has skinny extents Nov 1 00:42:21.048323 systemd[1]: Starting ignition-setup.service... Nov 1 00:42:21.111330 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 1 00:42:21.067573 systemd[1]: Starting parse-ip-for-networkd.service... Nov 1 00:42:21.147876 systemd[1]: Finished ignition-setup.service. Nov 1 00:42:21.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:21.157761 systemd[1]: Starting ignition-fetch-offline.service... Nov 1 00:42:21.192521 systemd[1]: Finished parse-ip-for-networkd.service. Nov 1 00:42:21.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:21.192000 audit: BPF prog-id=9 op=LOAD Nov 1 00:42:21.194490 systemd[1]: Starting systemd-networkd.service... Nov 1 00:42:21.227912 systemd-networkd[677]: lo: Link UP Nov 1 00:42:21.227927 systemd-networkd[677]: lo: Gained carrier Nov 1 00:42:21.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:21.228877 systemd-networkd[677]: Enumeration completed Nov 1 00:42:21.229301 systemd-networkd[677]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:42:21.229534 systemd[1]: Started systemd-networkd.service. 
Nov 1 00:42:21.231659 systemd-networkd[677]: eth0: Link UP Nov 1 00:42:21.231668 systemd-networkd[677]: eth0: Gained carrier Nov 1 00:42:21.245465 systemd[1]: Reached target network.target. Nov 1 00:42:21.245552 systemd-networkd[677]: eth0: Overlong DHCP hostname received, shortened from 'ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762.c.flatcar-212911.internal' to 'ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762' Nov 1 00:42:21.245571 systemd-networkd[677]: eth0: DHCPv4 address 10.128.0.16/32, gateway 10.128.0.1 acquired from 169.254.169.254 Nov 1 00:42:21.261509 systemd[1]: Starting iscsiuio.service... Nov 1 00:42:21.352598 systemd[1]: Started iscsiuio.service. Nov 1 00:42:21.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:21.361400 systemd[1]: Starting iscsid.service... Nov 1 00:42:21.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:21.389525 iscsid[686]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Nov 1 00:42:21.389525 iscsid[686]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Nov 1 00:42:21.389525 iscsid[686]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Nov 1 00:42:21.389525 iscsid[686]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Nov 1 00:42:21.389525 iscsid[686]: If using hardware iscsi like qla4xxx this message can be ignored. 
Nov 1 00:42:21.389525 iscsid[686]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Nov 1 00:42:21.389525 iscsid[686]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Nov 1 00:42:21.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:21.374708 systemd[1]: Started iscsid.service. Nov 1 00:42:21.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:21.450794 ignition[647]: Ignition 2.14.0 Nov 1 00:42:21.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:21.384301 systemd[1]: Starting dracut-initqueue.service... Nov 1 00:42:21.450811 ignition[647]: Stage: fetch-offline Nov 1 00:42:21.405477 systemd[1]: Finished dracut-initqueue.service. Nov 1 00:42:21.450900 ignition[647]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:42:21.470934 systemd[1]: Reached target remote-fs-pre.target. Nov 1 00:42:21.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:21.450953 ignition[647]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Nov 1 00:42:21.471038 systemd[1]: Reached target remote-cryptsetup.target. 
Nov 1 00:42:21.478252 ignition[647]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 1 00:42:21.496439 systemd[1]: Reached target remote-fs.target. Nov 1 00:42:21.478521 ignition[647]: parsed url from cmdline: "" Nov 1 00:42:21.497910 systemd[1]: Starting dracut-pre-mount.service... Nov 1 00:42:21.478528 ignition[647]: no config URL provided Nov 1 00:42:21.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:21.514968 systemd[1]: Finished ignition-fetch-offline.service. Nov 1 00:42:21.478536 ignition[647]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:42:21.539999 systemd[1]: Finished dracut-pre-mount.service. Nov 1 00:42:21.478550 ignition[647]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:42:21.556198 systemd[1]: Starting ignition-fetch.service... Nov 1 00:42:21.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:21.478561 ignition[647]: failed to fetch config: resource requires networking Nov 1 00:42:21.593436 unknown[701]: fetched base config from "system" Nov 1 00:42:21.478757 ignition[647]: Ignition finished successfully Nov 1 00:42:21.593450 unknown[701]: fetched base config from "system" Nov 1 00:42:21.569210 ignition[701]: Ignition 2.14.0 Nov 1 00:42:21.593462 unknown[701]: fetched user config from "gcp" Nov 1 00:42:21.569224 ignition[701]: Stage: fetch Nov 1 00:42:21.596485 systemd[1]: Finished ignition-fetch.service. Nov 1 00:42:21.569451 ignition[701]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:42:21.611078 systemd[1]: Starting ignition-kargs.service... 
Nov 1 00:42:21.569502 ignition[701]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Nov 1 00:42:21.651024 systemd[1]: Finished ignition-kargs.service. Nov 1 00:42:21.578550 ignition[701]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 1 00:42:21.668524 systemd[1]: Starting ignition-disks.service... Nov 1 00:42:21.578769 ignition[701]: parsed url from cmdline: "" Nov 1 00:42:21.705982 systemd[1]: Finished ignition-disks.service. Nov 1 00:42:21.578775 ignition[701]: no config URL provided Nov 1 00:42:21.721782 systemd[1]: Reached target initrd-root-device.target. Nov 1 00:42:21.578782 ignition[701]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:42:21.736563 systemd[1]: Reached target local-fs-pre.target. Nov 1 00:42:21.578794 ignition[701]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:42:21.755556 systemd[1]: Reached target local-fs.target. Nov 1 00:42:21.578952 ignition[701]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Nov 1 00:42:21.770539 systemd[1]: Reached target sysinit.target. Nov 1 00:42:21.585207 ignition[701]: GET result: OK Nov 1 00:42:21.790590 systemd[1]: Reached target basic.target. Nov 1 00:42:21.585353 ignition[701]: parsing config with SHA512: ef44b489ee0e384f520b871a69980f49ae6034c28068a82f5c90a979eb65c7ea974d5b437633580f0db05c2a063e479692175f891c5c02f5e04733b44f20d540 Nov 1 00:42:21.811161 systemd[1]: Starting systemd-fsck-root.service... 
Nov 1 00:42:21.594525 ignition[701]: fetch: fetch complete Nov 1 00:42:21.594534 ignition[701]: fetch: fetch passed Nov 1 00:42:21.594595 ignition[701]: Ignition finished successfully Nov 1 00:42:21.627147 ignition[707]: Ignition 2.14.0 Nov 1 00:42:21.627158 ignition[707]: Stage: kargs Nov 1 00:42:21.627368 ignition[707]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:42:21.627403 ignition[707]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Nov 1 00:42:21.636693 ignition[707]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 1 00:42:21.638440 ignition[707]: kargs: kargs passed Nov 1 00:42:21.638510 ignition[707]: Ignition finished successfully Nov 1 00:42:21.683460 ignition[713]: Ignition 2.14.0 Nov 1 00:42:21.683484 ignition[713]: Stage: disks Nov 1 00:42:21.683652 ignition[713]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:42:21.683684 ignition[713]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Nov 1 00:42:21.690801 ignition[713]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 1 00:42:21.692595 ignition[713]: disks: disks passed Nov 1 00:42:21.692687 ignition[713]: Ignition finished successfully Nov 1 00:42:21.854826 systemd-fsck[721]: ROOT: clean, 637/1628000 files, 124069/1617920 blocks Nov 1 00:42:22.025430 systemd[1]: Finished systemd-fsck-root.service. Nov 1 00:42:22.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:22.040883 systemd[1]: Mounting sysroot.mount... Nov 1 00:42:22.070211 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. 
Nov 1 00:42:22.070852 systemd[1]: Mounted sysroot.mount. Nov 1 00:42:22.071261 systemd[1]: Reached target initrd-root-fs.target. Nov 1 00:42:22.093304 systemd[1]: Mounting sysroot-usr.mount... Nov 1 00:42:22.106045 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Nov 1 00:42:22.106118 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 00:42:22.106160 systemd[1]: Reached target ignition-diskful.target. Nov 1 00:42:22.185988 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (727) Nov 1 00:42:22.186036 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:42:22.186059 kernel: BTRFS info (device sda6): using free space tree Nov 1 00:42:22.122116 systemd[1]: Mounted sysroot-usr.mount. Nov 1 00:42:22.216344 kernel: BTRFS info (device sda6): has skinny extents Nov 1 00:42:22.216393 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 1 00:42:22.148991 systemd[1]: Mounting sysroot-usr-share-oem.mount... Nov 1 00:42:22.225383 initrd-setup-root[732]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 00:42:22.165070 systemd[1]: Starting initrd-setup-root.service... Nov 1 00:42:22.249395 initrd-setup-root[756]: cut: /sysroot/etc/group: No such file or directory Nov 1 00:42:22.222559 systemd[1]: Mounted sysroot-usr-share-oem.mount. Nov 1 00:42:22.269365 initrd-setup-root[764]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 00:42:22.279346 initrd-setup-root[774]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 00:42:22.304888 systemd[1]: Finished initrd-setup-root.service. 
Nov 1 00:42:22.344578 kernel: kauditd_printk_skb: 23 callbacks suppressed Nov 1 00:42:22.344632 kernel: audit: type=1130 audit(1761957742.304:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:22.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:22.306745 systemd[1]: Starting ignition-mount.service... Nov 1 00:42:22.353823 systemd[1]: Starting sysroot-boot.service... Nov 1 00:42:22.367689 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Nov 1 00:42:22.367852 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Nov 1 00:42:22.393375 ignition[792]: INFO : Ignition 2.14.0 Nov 1 00:42:22.393375 ignition[792]: INFO : Stage: mount Nov 1 00:42:22.393375 ignition[792]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:42:22.393375 ignition[792]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Nov 1 00:42:22.466450 kernel: audit: type=1130 audit(1761957742.426:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:22.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:22.403806 systemd[1]: Finished sysroot-boot.service. Nov 1 00:42:22.501443 kernel: audit: type=1130 audit(1761957742.473:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Nov 1 00:42:22.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:22.501573 ignition[792]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 1 00:42:22.501573 ignition[792]: INFO : mount: mount passed Nov 1 00:42:22.501573 ignition[792]: INFO : Ignition finished successfully Nov 1 00:42:22.549548 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (802) Nov 1 00:42:22.549598 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:42:22.549622 kernel: BTRFS info (device sda6): using free space tree Nov 1 00:42:22.428122 systemd[1]: Finished ignition-mount.service. Nov 1 00:42:22.579564 kernel: BTRFS info (device sda6): has skinny extents Nov 1 00:42:22.579607 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 1 00:42:22.476421 systemd[1]: Starting ignition-files.service... Nov 1 00:42:22.513334 systemd[1]: Mounting sysroot-usr-share-oem.mount... Nov 1 00:42:22.577152 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Nov 1 00:42:22.612429 ignition[821]: INFO : Ignition 2.14.0
Nov 1 00:42:22.612429 ignition[821]: INFO : Stage: files
Nov 1 00:42:22.612429 ignition[821]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:42:22.612429 ignition[821]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Nov 1 00:42:22.612429 ignition[821]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 1 00:42:22.635105 unknown[821]: wrote ssh authorized keys file for user: core
Nov 1 00:42:22.678365 ignition[821]: DEBUG : files: compiled without relabeling support, skipping
Nov 1 00:42:22.678365 ignition[821]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 1 00:42:22.678365 ignition[821]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 1 00:42:22.678365 ignition[821]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 1 00:42:22.678365 ignition[821]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 1 00:42:22.678365 ignition[821]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 1 00:42:22.678365 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 1 00:42:22.678365 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 1 00:42:22.678365 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 1 00:42:22.678365 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Nov 1 00:42:22.884682 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 1 00:42:23.185018 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 1 00:42:23.202401 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/hosts"
Nov 1 00:42:23.202401 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(5): oem config not found in "/usr/share/oem", looking on oem partition
Nov 1 00:42:23.202401 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(6): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1547700636"
Nov 1 00:42:23.202401 ignition[821]: CRITICAL : files: createFilesystemsFiles: createFiles: op(5): op(6): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1547700636": device or resource busy
Nov 1 00:42:23.202401 ignition[821]: ERROR : files: createFilesystemsFiles: createFiles: op(5): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1547700636", trying btrfs: device or resource busy
Nov 1 00:42:23.202401 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1547700636"
Nov 1 00:42:23.202401 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(7): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1547700636"
Nov 1 00:42:23.202401 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(8): [started] unmounting "/mnt/oem1547700636"
Nov 1 00:42:23.202401 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(8): [finished] unmounting "/mnt/oem1547700636"
Nov 1 00:42:23.202401 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/hosts"
Nov 1 00:42:23.202401 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:42:23.202401 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:42:23.202401 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh"
Nov 1 00:42:23.202401 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition
Nov 1 00:42:23.185594 systemd-networkd[677]: eth0: Gained IPv6LL
Nov 1 00:42:23.438419 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem498029299"
Nov 1 00:42:23.438419 ignition[821]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem498029299": device or resource busy
Nov 1 00:42:23.438419 ignition[821]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem498029299", trying btrfs: device or resource busy
Nov 1 00:42:23.438419 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem498029299"
Nov 1 00:42:23.438419 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem498029299"
Nov 1 00:42:23.438419 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem498029299"
Nov 1 00:42:23.438419 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem498029299"
Nov 1 00:42:23.438419 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh"
Nov 1 00:42:23.438419 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/install.sh"
Nov 1 00:42:23.438419 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/install.sh"
Nov 1 00:42:23.438419 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:42:23.438419 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:42:23.438419 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:42:23.438419 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:42:23.438419 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:42:23.203103 systemd[1]: mnt-oem1547700636.mount: Deactivated successfully.
Nov 1 00:42:23.692475 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:42:23.692475 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:42:23.692475 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:42:23.692475 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service"
Nov 1 00:42:23.692475 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(13): oem config not found in "/usr/share/oem", looking on oem partition
Nov 1 00:42:23.692475 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(14): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3608777579"
Nov 1 00:42:23.692475 ignition[821]: CRITICAL : files: createFilesystemsFiles: createFiles: op(13): op(14): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3608777579": device or resource busy
Nov 1 00:42:23.692475 ignition[821]: ERROR : files: createFilesystemsFiles: createFiles: op(13): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3608777579", trying btrfs: device or resource busy
Nov 1 00:42:23.692475 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3608777579"
Nov 1 00:42:23.692475 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3608777579"
Nov 1 00:42:23.692475 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [started] unmounting "/mnt/oem3608777579"
Nov 1 00:42:23.692475 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [finished] unmounting "/mnt/oem3608777579"
Nov 1 00:42:23.692475 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service"
Nov 1 00:42:23.224438 systemd[1]: mnt-oem498029299.mount: Deactivated successfully.
Nov 1 00:42:23.936434 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(17): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:42:23.936434 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Nov 1 00:42:23.936434 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET result: OK
Nov 1 00:42:24.805163 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(17): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:42:24.824399 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(18): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service"
Nov 1 00:42:24.824399 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(18): oem config not found in "/usr/share/oem", looking on oem partition
Nov 1 00:42:24.824399 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(19): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2479463225"
Nov 1 00:42:24.824399 ignition[821]: CRITICAL : files: createFilesystemsFiles: createFiles: op(18): op(19): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2479463225": device or resource busy
Nov 1 00:42:24.824399 ignition[821]: ERROR : files: createFilesystemsFiles: createFiles: op(18): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2479463225", trying btrfs: device or resource busy
Nov 1 00:42:24.824399 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2479463225"
Nov 1 00:42:24.971451 kernel: audit: type=1130 audit(1761957744.866:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:24.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:24.971646 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2479463225"
Nov 1 00:42:24.971646 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [started] unmounting "/mnt/oem2479463225"
Nov 1 00:42:24.971646 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [finished] unmounting "/mnt/oem2479463225"
Nov 1 00:42:24.971646 ignition[821]: INFO : files: createFilesystemsFiles: createFiles: op(18): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service"
Nov 1 00:42:24.971646 ignition[821]: INFO : files: op(1c): [started] processing unit "coreos-metadata-sshkeys@.service"
Nov 1 00:42:24.971646 ignition[821]: INFO : files: op(1c): [finished] processing unit "coreos-metadata-sshkeys@.service"
Nov 1 00:42:24.971646 ignition[821]: INFO : files: op(1d): [started] processing unit "oem-gce.service"
Nov 1 00:42:24.971646 ignition[821]: INFO : files: op(1d): [finished] processing unit "oem-gce.service"
Nov 1 00:42:24.971646 ignition[821]: INFO : files: op(1e): [started] processing unit "oem-gce-enable-oslogin.service"
Nov 1 00:42:24.971646 ignition[821]: INFO : files: op(1e): [finished] processing unit "oem-gce-enable-oslogin.service"
Nov 1 00:42:24.971646 ignition[821]: INFO : files: op(1f): [started] processing unit "containerd.service"
Nov 1 00:42:24.971646 ignition[821]: INFO : files: op(1f): op(20): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 1 00:42:24.971646 ignition[821]: INFO : files: op(1f): op(20): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 1 00:42:24.971646 ignition[821]: INFO : files: op(1f): [finished] processing unit "containerd.service"
Nov 1 00:42:24.971646 ignition[821]: INFO : files: op(21): [started] processing unit "prepare-helm.service"
Nov 1 00:42:24.971646 ignition[821]: INFO : files: op(21): op(22): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 00:42:24.971646 ignition[821]: INFO : files: op(21): op(22): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 00:42:24.971646 ignition[821]: INFO : files: op(21): [finished] processing unit "prepare-helm.service"
Nov 1 00:42:24.971646 ignition[821]: INFO : files: op(23): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Nov 1 00:42:25.430481 kernel: audit: type=1130 audit(1761957744.980:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.430553 kernel: audit: type=1130 audit(1761957745.039:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.430578 kernel: audit: type=1131 audit(1761957745.039:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.430614 kernel: audit: type=1130 audit(1761957745.178:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.430630 kernel: audit: type=1131 audit(1761957745.178:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.430645 kernel: audit: type=1130 audit(1761957745.329:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:24.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:24.831765 systemd[1]: mnt-oem2479463225.mount: Deactivated successfully.
Nov 1 00:42:25.446654 ignition[821]: INFO : files: op(23): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Nov 1 00:42:25.446654 ignition[821]: INFO : files: op(24): [started] setting preset to enabled for "oem-gce.service"
Nov 1 00:42:25.446654 ignition[821]: INFO : files: op(24): [finished] setting preset to enabled for "oem-gce.service"
Nov 1 00:42:25.446654 ignition[821]: INFO : files: op(25): [started] setting preset to enabled for "oem-gce-enable-oslogin.service"
Nov 1 00:42:25.446654 ignition[821]: INFO : files: op(25): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service"
Nov 1 00:42:25.446654 ignition[821]: INFO : files: op(26): [started] setting preset to enabled for "prepare-helm.service"
Nov 1 00:42:25.446654 ignition[821]: INFO : files: op(26): [finished] setting preset to enabled for "prepare-helm.service"
Nov 1 00:42:25.446654 ignition[821]: INFO : files: createResultFile: createFiles: op(27): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 00:42:25.446654 ignition[821]: INFO : files: createResultFile: createFiles: op(27): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 00:42:25.446654 ignition[821]: INFO : files: files passed
Nov 1 00:42:25.446654 ignition[821]: INFO : Ignition finished successfully
Nov 1 00:42:25.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:24.844873 systemd[1]: Finished ignition-files.service.
Nov 1 00:42:25.634398 initrd-setup-root-after-ignition[844]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 00:42:24.879483 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Nov 1 00:42:24.904582 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Nov 1 00:42:24.906002 systemd[1]: Starting ignition-quench.service...
Nov 1 00:42:24.954986 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Nov 1 00:42:24.982089 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 1 00:42:24.982295 systemd[1]: Finished ignition-quench.service.
Nov 1 00:42:25.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.040765 systemd[1]: Reached target ignition-complete.target.
Nov 1 00:42:25.119002 systemd[1]: Starting initrd-parse-etc.service...
Nov 1 00:42:25.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.155558 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 1 00:42:25.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.155721 systemd[1]: Finished initrd-parse-etc.service.
Nov 1 00:42:25.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.179887 systemd[1]: Reached target initrd-fs.target.
Nov 1 00:42:25.232707 systemd[1]: Reached target initrd.target.
Nov 1 00:42:25.268721 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Nov 1 00:42:25.850565 ignition[859]: INFO : Ignition 2.14.0
Nov 1 00:42:25.850565 ignition[859]: INFO : Stage: umount
Nov 1 00:42:25.850565 ignition[859]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:42:25.850565 ignition[859]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Nov 1 00:42:25.850565 ignition[859]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 1 00:42:25.850565 ignition[859]: INFO : umount: umount passed
Nov 1 00:42:25.850565 ignition[859]: INFO : Ignition finished successfully
Nov 1 00:42:25.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.270327 systemd[1]: Starting dracut-pre-pivot.service...
Nov 1 00:42:25.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.966717 iscsid[686]: iscsid shutting down.
Nov 1 00:42:25.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.288943 systemd[1]: Finished dracut-pre-pivot.service.
Nov 1 00:42:25.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.332443 systemd[1]: Starting initrd-cleanup.service...
Nov 1 00:42:26.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.392123 systemd[1]: Stopped target nss-lookup.target.
Nov 1 00:42:26.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.416808 systemd[1]: Stopped target remote-cryptsetup.target.
Nov 1 00:42:26.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.438769 systemd[1]: Stopped target timers.target.
Nov 1 00:42:26.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.453752 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 1 00:42:26.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.453981 systemd[1]: Stopped dracut-pre-pivot.service.
Nov 1 00:42:26.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.485995 systemd[1]: Stopped target initrd.target.
Nov 1 00:42:25.518843 systemd[1]: Stopped target basic.target.
Nov 1 00:42:25.550899 systemd[1]: Stopped target ignition-complete.target.
Nov 1 00:42:25.563781 systemd[1]: Stopped target ignition-diskful.target.
Nov 1 00:42:25.605762 systemd[1]: Stopped target initrd-root-device.target.
Nov 1 00:42:25.627756 systemd[1]: Stopped target remote-fs.target.
Nov 1 00:42:26.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.642703 systemd[1]: Stopped target remote-fs-pre.target.
Nov 1 00:42:26.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.664768 systemd[1]: Stopped target sysinit.target.
Nov 1 00:42:25.682754 systemd[1]: Stopped target local-fs.target.
Nov 1 00:42:26.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.707782 systemd[1]: Stopped target local-fs-pre.target.
Nov 1 00:42:26.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:26.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.722769 systemd[1]: Stopped target swap.target.
Nov 1 00:42:25.737706 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 1 00:42:25.737924 systemd[1]: Stopped dracut-pre-mount.service.
Nov 1 00:42:25.754917 systemd[1]: Stopped target cryptsetup.target.
Nov 1 00:42:25.769687 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 1 00:42:26.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.769908 systemd[1]: Stopped dracut-initqueue.service.
Nov 1 00:42:26.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:26.319000 audit: BPF prog-id=6 op=UNLOAD
Nov 1 00:42:25.785891 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 1 00:42:25.786098 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Nov 1 00:42:25.803840 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 1 00:42:26.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.804042 systemd[1]: Stopped ignition-files.service.
Nov 1 00:42:26.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.821554 systemd[1]: Stopping ignition-mount.service...
Nov 1 00:42:26.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.857878 systemd[1]: Stopping iscsid.service...
Nov 1 00:42:25.875449 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 1 00:42:26.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.875803 systemd[1]: Stopped kmod-static-nodes.service.
Nov 1 00:42:25.897414 systemd[1]: Stopping sysroot-boot.service...
Nov 1 00:42:25.927592 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 1 00:42:25.927882 systemd[1]: Stopped systemd-udev-trigger.service.
Nov 1 00:42:26.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.957936 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 1 00:42:26.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.958158 systemd[1]: Stopped dracut-pre-trigger.service.
Nov 1 00:42:26.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.979262 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 1 00:42:25.980461 systemd[1]: iscsid.service: Deactivated successfully.
Nov 1 00:42:25.980596 systemd[1]: Stopped iscsid.service.
Nov 1 00:42:26.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.990343 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 1 00:42:26.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:25.990469 systemd[1]: Stopped ignition-mount.service.
Nov 1 00:42:26.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:26.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:26.005330 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 1 00:42:26.005468 systemd[1]: Stopped sysroot-boot.service.
Nov 1 00:42:26.019447 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 1 00:42:26.634000 audit: BPF prog-id=8 op=UNLOAD
Nov 1 00:42:26.634000 audit: BPF prog-id=7 op=UNLOAD
Nov 1 00:42:26.019620 systemd[1]: Stopped ignition-disks.service.
Nov 1 00:42:26.635000 audit: BPF prog-id=5 op=UNLOAD
Nov 1 00:42:26.636000 audit: BPF prog-id=4 op=UNLOAD
Nov 1 00:42:26.636000 audit: BPF prog-id=3 op=UNLOAD
Nov 1 00:42:26.034555 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 1 00:42:26.034650 systemd[1]: Stopped ignition-kargs.service.
Nov 1 00:42:26.666395 systemd-journald[190]: Received SIGTERM from PID 1 (n/a).
Nov 1 00:42:26.050576 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 1 00:42:26.050663 systemd[1]: Stopped ignition-fetch.service.
Nov 1 00:42:26.065686 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 1 00:42:26.065782 systemd[1]: Stopped ignition-fetch-offline.service.
Nov 1 00:42:26.083668 systemd[1]: Stopped target paths.target.
Nov 1 00:42:26.097524 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 1 00:42:26.101313 systemd[1]: Stopped systemd-ask-password-console.path.
Nov 1 00:42:26.112409 systemd[1]: Stopped target slices.target.
Nov 1 00:42:26.125406 systemd[1]: Stopped target sockets.target.
Nov 1 00:42:26.139549 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 1 00:42:26.139653 systemd[1]: Closed iscsid.socket.
Nov 1 00:42:26.154485 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 1 00:42:26.154595 systemd[1]: Stopped ignition-setup.service.
Nov 1 00:42:26.170565 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 1 00:42:26.170659 systemd[1]: Stopped initrd-setup-root.service.
Nov 1 00:42:26.185699 systemd[1]: Stopping iscsiuio.service...
Nov 1 00:42:26.200922 systemd[1]: iscsiuio.service: Deactivated successfully.
Nov 1 00:42:26.201066 systemd[1]: Stopped iscsiuio.service.
Nov 1 00:42:26.215043 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 1 00:42:26.215252 systemd[1]: Finished initrd-cleanup.service.
Nov 1 00:42:26.230924 systemd[1]: Stopped target network.target.
Nov 1 00:42:26.244614 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 1 00:42:26.244712 systemd[1]: Closed iscsiuio.socket.
Nov 1 00:42:26.258744 systemd[1]: Stopping systemd-networkd.service...
Nov 1 00:42:26.262283 systemd-networkd[677]: eth0: DHCPv6 lease lost
Nov 1 00:42:26.274771 systemd[1]: Stopping systemd-resolved.service...
Nov 1 00:42:26.289886 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 1 00:42:26.290033 systemd[1]: Stopped systemd-resolved.service.
Nov 1 00:42:26.305367 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 1 00:42:26.305532 systemd[1]: Stopped systemd-networkd.service.
Nov 1 00:42:26.321278 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 1 00:42:26.321331 systemd[1]: Closed systemd-networkd.socket.
Nov 1 00:42:26.336750 systemd[1]: Stopping network-cleanup.service...
Nov 1 00:42:26.352452 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 1 00:42:26.352606 systemd[1]: Stopped parse-ip-for-networkd.service.
Nov 1 00:42:26.369607 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 1 00:42:26.369700 systemd[1]: Stopped systemd-sysctl.service.
Nov 1 00:42:26.385683 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 1 00:42:26.385766 systemd[1]: Stopped systemd-modules-load.service.
Nov 1 00:42:26.400705 systemd[1]: Stopping systemd-udevd.service...
Nov 1 00:42:26.417190 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 1 00:42:26.417961 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 1 00:42:26.418130 systemd[1]: Stopped systemd-udevd.service.
Nov 1 00:42:26.424536 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 1 00:42:26.424640 systemd[1]: Closed systemd-udevd-control.socket.
Nov 1 00:42:26.453566 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 1 00:42:26.453640 systemd[1]: Closed systemd-udevd-kernel.socket.
Nov 1 00:42:26.468584 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 1 00:42:26.468683 systemd[1]: Stopped dracut-pre-udev.service.
Nov 1 00:42:26.486679 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 1 00:42:26.486784 systemd[1]: Stopped dracut-cmdline.service.
Nov 1 00:42:26.502728 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 00:42:26.502829 systemd[1]: Stopped dracut-cmdline-ask.service.
Nov 1 00:42:26.522109 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Nov 1 00:42:26.544477 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:42:26.544605 systemd[1]: Stopped systemd-vconsole-setup.service.
Nov 1 00:42:26.561302 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 1 00:42:26.561470 systemd[1]: Stopped network-cleanup.service.
Nov 1 00:42:26.575911 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 1 00:42:26.576044 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Nov 1 00:42:26.591833 systemd[1]: Reached target initrd-switch-root.target.
Nov 1 00:42:26.607751 systemd[1]: Starting initrd-switch-root.service...
Nov 1 00:42:26.631036 systemd[1]: Switching root.
Nov 1 00:42:26.678251 systemd-journald[190]: Journal stopped
Nov 1 00:42:31.508157 kernel: SELinux: Class mctp_socket not defined in policy.
Nov 1 00:42:31.508299 kernel: SELinux: Class anon_inode not defined in policy.
Nov 1 00:42:31.508333 kernel: SELinux: the above unknown classes and permissions will be allowed
Nov 1 00:42:31.508362 kernel: SELinux: policy capability network_peer_controls=1
Nov 1 00:42:31.508390 kernel: SELinux: policy capability open_perms=1
Nov 1 00:42:31.508419 kernel: SELinux: policy capability extended_socket_class=1
Nov 1 00:42:31.508441 kernel: SELinux: policy capability always_check_network=0
Nov 1 00:42:31.508489 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 1 00:42:31.508512 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 1 00:42:31.508539 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 1 00:42:31.508563 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 1 00:42:31.508595 systemd[1]: Successfully loaded SELinux policy in 115.703ms.
Nov 1 00:42:31.508642 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.794ms.
Nov 1 00:42:31.508671 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 1 00:42:31.508706 systemd[1]: Detected virtualization kvm.
Nov 1 00:42:31.508730 systemd[1]: Detected architecture x86-64.
Nov 1 00:42:31.508752 systemd[1]: Detected first boot.
Nov 1 00:42:31.508777 systemd[1]: Initializing machine ID from VM UUID.
Nov 1 00:42:31.509885 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Nov 1 00:42:31.509930 kernel: kauditd_printk_skb: 42 callbacks suppressed
Nov 1 00:42:31.509963 kernel: audit: type=1400 audit(1761957747.441:86): avc: denied { associate } for pid=909 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Nov 1 00:42:31.509998 kernel: audit: type=1300 audit(1761957747.441:86): arch=c000003e syscall=188 success=yes exit=0 a0=c00014f672 a1=c0000d0ae0 a2=c0000d8a00 a3=32 items=0 ppid=892 pid=909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:31.510024 kernel: audit: type=1327 audit(1761957747.441:86): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Nov 1 00:42:31.510047 kernel: audit: type=1400 audit(1761957747.451:87): avc: denied { associate } for pid=909 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Nov 1 00:42:31.510072 kernel: audit: type=1300 audit(1761957747.451:87): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014f749 a2=1ed a3=0 items=2 ppid=892 pid=909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:31.510094 kernel: audit: type=1307 audit(1761957747.451:87): cwd="/"
Nov 1 00:42:31.510122 kernel: audit: type=1302 audit(1761957747.451:87): item=0 name=(null) inode=2 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:31.510145 kernel: audit: type=1302 audit(1761957747.451:87): item=1 name=(null) inode=3 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:31.510168 kernel: audit: type=1327 audit(1761957747.451:87): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Nov 1 00:42:31.510208 systemd[1]: Populated /etc with preset unit settings.
Nov 1 00:42:31.510236 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Nov 1 00:42:31.510261 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 1 00:42:31.510288 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:42:31.510317 systemd[1]: Queued start job for default target multi-user.target.
Nov 1 00:42:31.510341 systemd[1]: Unnecessary job was removed for dev-sda6.device.
Nov 1 00:42:31.510367 systemd[1]: Created slice system-addon\x2dconfig.slice.
Nov 1 00:42:31.510391 systemd[1]: Created slice system-addon\x2drun.slice.
Nov 1 00:42:31.510417 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Nov 1 00:42:31.510451 systemd[1]: Created slice system-getty.slice.
Nov 1 00:42:31.510482 systemd[1]: Created slice system-modprobe.slice.
Nov 1 00:42:31.510506 systemd[1]: Created slice system-serial\x2dgetty.slice.
Nov 1 00:42:31.510535 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Nov 1 00:42:31.510562 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Nov 1 00:42:31.510586 systemd[1]: Created slice user.slice.
Nov 1 00:42:31.510610 systemd[1]: Started systemd-ask-password-console.path.
Nov 1 00:42:31.510633 systemd[1]: Started systemd-ask-password-wall.path.
Nov 1 00:42:31.510658 systemd[1]: Set up automount boot.automount.
Nov 1 00:42:31.510681 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Nov 1 00:42:31.510713 systemd[1]: Reached target integritysetup.target.
Nov 1 00:42:31.510741 systemd[1]: Reached target remote-cryptsetup.target.
Nov 1 00:42:31.510765 systemd[1]: Reached target remote-fs.target.
Nov 1 00:42:31.510788 systemd[1]: Reached target slices.target.
Nov 1 00:42:31.510812 systemd[1]: Reached target swap.target.
Nov 1 00:42:31.510836 systemd[1]: Reached target torcx.target.
Nov 1 00:42:31.510860 systemd[1]: Reached target veritysetup.target.
Nov 1 00:42:31.510884 systemd[1]: Listening on systemd-coredump.socket.
Nov 1 00:42:31.510907 systemd[1]: Listening on systemd-initctl.socket.
Nov 1 00:42:31.510931 kernel: audit: type=1400 audit(1761957751.064:88): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Nov 1 00:42:31.510959 systemd[1]: Listening on systemd-journald-audit.socket.
Nov 1 00:42:31.510982 systemd[1]: Listening on systemd-journald-dev-log.socket.
Nov 1 00:42:31.511007 systemd[1]: Listening on systemd-journald.socket.
Nov 1 00:42:31.511031 systemd[1]: Listening on systemd-networkd.socket.
Nov 1 00:42:31.511055 systemd[1]: Listening on systemd-udevd-control.socket.
Nov 1 00:42:31.511081 systemd[1]: Listening on systemd-udevd-kernel.socket.
Nov 1 00:42:31.511104 systemd[1]: Listening on systemd-userdbd.socket.
Nov 1 00:42:31.511129 systemd[1]: Mounting dev-hugepages.mount...
Nov 1 00:42:31.511156 systemd[1]: Mounting dev-mqueue.mount...
Nov 1 00:42:31.511194 systemd[1]: Mounting media.mount...
Nov 1 00:42:31.511223 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:42:31.511247 systemd[1]: Mounting sys-kernel-debug.mount...
Nov 1 00:42:31.511272 systemd[1]: Mounting sys-kernel-tracing.mount...
Nov 1 00:42:31.511295 systemd[1]: Mounting tmp.mount...
Nov 1 00:42:31.511319 systemd[1]: Starting flatcar-tmpfiles.service...
Nov 1 00:42:31.511343 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Nov 1 00:42:31.511367 systemd[1]: Starting kmod-static-nodes.service...
Nov 1 00:42:31.511391 systemd[1]: Starting modprobe@configfs.service...
Nov 1 00:42:31.511415 systemd[1]: Starting modprobe@dm_mod.service...
Nov 1 00:42:31.511442 systemd[1]: Starting modprobe@drm.service...
Nov 1 00:42:31.511466 systemd[1]: Starting modprobe@efi_pstore.service...
Nov 1 00:42:31.511491 systemd[1]: Starting modprobe@fuse.service...
Nov 1 00:42:31.511514 systemd[1]: Starting modprobe@loop.service...
Nov 1 00:42:31.511537 kernel: fuse: init (API version 7.34)
Nov 1 00:42:31.511561 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 1 00:42:31.511585 kernel: loop: module loaded
Nov 1 00:42:31.511610 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Nov 1 00:42:31.511633 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Nov 1 00:42:31.511661 systemd[1]: Starting systemd-journald.service...
Nov 1 00:42:31.511685 systemd[1]: Starting systemd-modules-load.service...
Nov 1 00:42:31.511716 systemd[1]: Starting systemd-network-generator.service...
Nov 1 00:42:31.511745 systemd-journald[1020]: Journal started
Nov 1 00:42:31.511839 systemd-journald[1020]: Runtime Journal (/run/log/journal/81268a5c19b8f7b036a03c41ad19be86) is 8.0M, max 148.8M, 140.8M free.
Nov 1 00:42:31.064000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Nov 1 00:42:31.064000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Nov 1 00:42:31.504000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Nov 1 00:42:31.504000 audit[1020]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd4b7c5b30 a2=4000 a3=7ffd4b7c5bcc items=0 ppid=1 pid=1020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:31.504000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Nov 1 00:42:31.534231 systemd[1]: Starting systemd-remount-fs.service...
Nov 1 00:42:31.549220 systemd[1]: Starting systemd-udev-trigger.service...
Nov 1 00:42:31.569208 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:42:31.579232 systemd[1]: Started systemd-journald.service.
Nov 1 00:42:31.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:31.589855 systemd[1]: Mounted dev-hugepages.mount.
Nov 1 00:42:31.597671 systemd[1]: Mounted dev-mqueue.mount.
Nov 1 00:42:31.604713 systemd[1]: Mounted media.mount.
Nov 1 00:42:31.611688 systemd[1]: Mounted sys-kernel-debug.mount.
Nov 1 00:42:31.621677 systemd[1]: Mounted sys-kernel-tracing.mount.
Nov 1 00:42:31.630623 systemd[1]: Mounted tmp.mount.
Nov 1 00:42:31.638212 systemd[1]: Finished flatcar-tmpfiles.service.
Nov 1 00:42:31.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:31.647062 systemd[1]: Finished kmod-static-nodes.service.
Nov 1 00:42:31.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:31.655969 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 1 00:42:31.656389 systemd[1]: Finished modprobe@configfs.service.
Nov 1 00:42:31.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:31.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:31.665945 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:42:31.666318 systemd[1]: Finished modprobe@dm_mod.service.
Nov 1 00:42:31.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:31.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:31.674889 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 00:42:31.675204 systemd[1]: Finished modprobe@drm.service.
Nov 1 00:42:31.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:31.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:31.683867 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:42:31.684148 systemd[1]: Finished modprobe@efi_pstore.service.
Nov 1 00:42:31.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:31.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:31.692926 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 1 00:42:31.693252 systemd[1]: Finished modprobe@fuse.service.
Nov 1 00:42:31.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:31.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:31.702917 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:42:31.703356 systemd[1]: Finished modprobe@loop.service.
Nov 1 00:42:31.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:31.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:31.711999 systemd[1]: Finished systemd-modules-load.service.
Nov 1 00:42:31.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:31.721031 systemd[1]: Finished systemd-network-generator.service.
Nov 1 00:42:31.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:31.730922 systemd[1]: Finished systemd-remount-fs.service.
Nov 1 00:42:31.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:31.739964 systemd[1]: Finished systemd-udev-trigger.service.
Nov 1 00:42:31.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:31.749071 systemd[1]: Reached target network-pre.target.
Nov 1 00:42:31.759362 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Nov 1 00:42:31.770736 systemd[1]: Mounting sys-kernel-config.mount...
Nov 1 00:42:31.778437 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 1 00:42:31.782945 systemd[1]: Starting systemd-hwdb-update.service...
Nov 1 00:42:31.792856 systemd[1]: Starting systemd-journal-flush.service...
Nov 1 00:42:31.801426 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 00:42:31.803908 systemd[1]: Starting systemd-random-seed.service...
Nov 1 00:42:31.811428 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Nov 1 00:42:31.813985 systemd[1]: Starting systemd-sysctl.service...
Nov 1 00:42:31.819858 systemd-journald[1020]: Time spent on flushing to /var/log/journal/81268a5c19b8f7b036a03c41ad19be86 is 77.568ms for 1098 entries.
Nov 1 00:42:31.819858 systemd-journald[1020]: System Journal (/var/log/journal/81268a5c19b8f7b036a03c41ad19be86) is 8.0M, max 584.8M, 576.8M free.
Nov 1 00:42:31.939576 systemd-journald[1020]: Received client request to flush runtime journal.
Nov 1 00:42:31.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:31.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:31.830901 systemd[1]: Starting systemd-sysusers.service...
Nov 1 00:42:31.842535 systemd[1]: Starting systemd-udev-settle.service...
Nov 1 00:42:31.853978 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Nov 1 00:42:31.941749 udevadm[1042]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Nov 1 00:42:31.862606 systemd[1]: Mounted sys-kernel-config.mount.
Nov 1 00:42:31.871968 systemd[1]: Finished systemd-random-seed.service.
Nov 1 00:42:31.880657 systemd[1]: Reached target first-boot-complete.target.
Nov 1 00:42:31.889925 systemd[1]: Finished systemd-sysctl.service.
Nov 1 00:42:31.937406 systemd[1]: Finished systemd-sysusers.service.
Nov 1 00:42:31.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:31.947220 systemd[1]: Finished systemd-journal-flush.service.
Nov 1 00:42:31.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:31.959279 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Nov 1 00:42:32.020061 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Nov 1 00:42:32.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:32.586799 systemd[1]: Finished systemd-hwdb-update.service.
Nov 1 00:42:32.600961 kernel: kauditd_printk_skb: 28 callbacks suppressed
Nov 1 00:42:32.601105 kernel: audit: type=1130 audit(1761957752.594:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:32.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:32.598104 systemd[1]: Starting systemd-udevd.service...
Nov 1 00:42:32.649228 systemd-udevd[1052]: Using default interface naming scheme 'v252'.
Nov 1 00:42:32.707353 systemd[1]: Started systemd-udevd.service.
Nov 1 00:42:32.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:32.738364 kernel: audit: type=1130 audit(1761957752.714:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:32.720736 systemd[1]: Starting systemd-networkd.service...
Nov 1 00:42:32.752697 systemd[1]: Starting systemd-userdbd.service...
Nov 1 00:42:32.818114 systemd[1]: Found device dev-ttyS0.device.
Nov 1 00:42:32.833486 systemd[1]: Started systemd-userdbd.service.
Nov 1 00:42:32.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:32.867209 kernel: audit: type=1130 audit(1761957752.841:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:32.942224 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Nov 1 00:42:33.006253 kernel: ACPI: button: Power Button [PWRF]
Nov 1 00:42:33.019778 systemd-networkd[1062]: lo: Link UP
Nov 1 00:42:33.019794 systemd-networkd[1062]: lo: Gained carrier
Nov 1 00:42:33.020741 systemd-networkd[1062]: Enumeration completed
Nov 1 00:42:33.020989 systemd[1]: Started systemd-networkd.service.
Nov 1 00:42:33.022270 systemd-networkd[1062]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:42:33.025633 systemd-networkd[1062]: eth0: Link UP
Nov 1 00:42:33.025791 systemd-networkd[1062]: eth0: Gained carrier
Nov 1 00:42:33.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:33.052279 kernel: audit: type=1130 audit(1761957753.028:118): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:33.065216 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Nov 1 00:42:33.069039 systemd-networkd[1062]: eth0: Overlong DHCP hostname received, shortened from 'ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762.c.flatcar-212911.internal' to 'ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762'
Nov 1 00:42:33.069074 systemd-networkd[1062]: eth0: DHCPv4 address 10.128.0.16/32, gateway 10.128.0.1 acquired from 169.254.169.254
Nov 1 00:42:33.066000 audit[1059]: AVC avc: denied { confidentiality } for pid=1059 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Nov 1 00:42:33.103155 kernel: audit: type=1400 audit(1761957753.066:119): avc: denied { confidentiality } for pid=1059 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Nov 1 00:42:33.103346 kernel: ACPI: button: Sleep Button [SLPF]
Nov 1 00:42:33.066000 audit[1059]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55c9f5cce3f0 a1=338ec a2=7ffbfc2f5bc5 a3=5 items=110 ppid=1052 pid=1059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:33.142263 kernel: audit: type=1300 audit(1761957753.066:119): arch=c000003e syscall=175 success=yes exit=0 a0=55c9f5cce3f0 a1=338ec a2=7ffbfc2f5bc5 a3=5 items=110 ppid=1052 pid=1059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:33.066000 audit: CWD cwd="/"
Nov 1 00:42:33.066000 audit: PATH item=0 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.181441 kernel: audit: type=1307 audit(1761957753.066:119): cwd="/"
Nov 1 00:42:33.181625 kernel: audit: type=1302 audit(1761957753.066:119): item=0 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=1 name=(null) inode=14749 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.212439 kernel: audit: type=1302 audit(1761957753.066:119): item=1 name=(null) inode=14749 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=2 name=(null) inode=14749 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.241032 kernel: audit: type=1302 audit(1761957753.066:119): item=2 name=(null) inode=14749 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=3 name=(null) inode=14750 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=4 name=(null) inode=14749 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=5 name=(null) inode=14751 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=6 name=(null) inode=14749 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=7 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=8 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=9 name=(null) inode=14753 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=10 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=11 name=(null) inode=14754 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=12 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=13 name=(null) inode=14755 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=14 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=15 name=(null) inode=14756 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=16 name=(null) inode=14752 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=17 name=(null) inode=14757 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=18 name=(null) inode=14749 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=19 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=20 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=21 name=(null) inode=14759 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=22 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=23 name=(null) inode=14760 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=24 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=25 name=(null) inode=14761 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=26 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=27 name=(null) inode=14762 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=28 name=(null) inode=14758 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=29 name=(null) inode=14763 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=30 name=(null) inode=14749 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=31 name=(null) inode=14764 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=32 name=(null) inode=14764 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=33 name=(null) inode=14765 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=34 name=(null) inode=14764 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=35 name=(null) inode=14766 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=36 name=(null) inode=14764 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=37 name=(null) inode=14767 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=38 name=(null) inode=14764 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=39 name=(null) inode=14768 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=40 name=(null) inode=14764 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=41 name=(null) inode=14769 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:33.066000 audit: PATH item=42 name=(null) inode=14749 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0
cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=43 name=(null) inode=14770 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=44 name=(null) inode=14770 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=45 name=(null) inode=14771 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=46 name=(null) inode=14770 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=47 name=(null) inode=14772 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=48 name=(null) inode=14770 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=49 name=(null) inode=14773 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=50 name=(null) inode=14770 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=51 name=(null) inode=14774 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 
audit: PATH item=52 name=(null) inode=14770 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=53 name=(null) inode=14775 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=54 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=55 name=(null) inode=14776 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=56 name=(null) inode=14776 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=57 name=(null) inode=14777 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=58 name=(null) inode=14776 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=59 name=(null) inode=14778 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=60 name=(null) inode=14776 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=61 name=(null) inode=14779 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=62 name=(null) inode=14779 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=63 name=(null) inode=14780 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=64 name=(null) inode=14779 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=65 name=(null) inode=14781 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=66 name=(null) inode=14779 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=67 name=(null) inode=14782 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=68 name=(null) inode=14779 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=69 name=(null) inode=14783 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=70 name=(null) inode=14779 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=71 name=(null) inode=14784 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=72 name=(null) inode=14776 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=73 name=(null) inode=14785 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=74 name=(null) inode=14785 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=75 name=(null) inode=14786 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=76 name=(null) inode=14785 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=77 name=(null) inode=14787 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=78 name=(null) inode=14785 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=79 name=(null) inode=14788 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=80 name=(null) inode=14785 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=81 name=(null) inode=14789 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=82 name=(null) inode=14785 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=83 name=(null) inode=14790 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=84 name=(null) inode=14776 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=85 name=(null) inode=14791 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=86 name=(null) inode=14791 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=87 name=(null) inode=14792 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=88 name=(null) inode=14791 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Nov 1 00:42:33.066000 audit: PATH item=89 name=(null) inode=14793 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=90 name=(null) inode=14791 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=91 name=(null) inode=14794 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=92 name=(null) inode=14791 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=93 name=(null) inode=14795 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=94 name=(null) inode=14791 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=95 name=(null) inode=14796 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=96 name=(null) inode=14776 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=97 name=(null) inode=14797 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=98 
name=(null) inode=14797 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=99 name=(null) inode=14798 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=100 name=(null) inode=14797 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=101 name=(null) inode=14799 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=102 name=(null) inode=14797 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=103 name=(null) inode=14800 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=104 name=(null) inode=14797 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=105 name=(null) inode=14801 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=106 name=(null) inode=14797 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=107 name=(null) inode=14802 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PATH item=109 name=(null) inode=14803 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:33.066000 audit: PROCTITLE proctitle="(udev-worker)" Nov 1 00:42:33.284139 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Nov 1 00:42:33.301920 kernel: EDAC MC: Ver: 3.0.0 Nov 1 00:42:33.302110 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Nov 1 00:42:33.325293 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Nov 1 00:42:33.339226 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 00:42:33.357058 systemd[1]: Finished systemd-udev-settle.service. Nov 1 00:42:33.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:33.367628 systemd[1]: Starting lvm2-activation-early.service... Nov 1 00:42:33.398363 lvm[1090]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:42:33.429426 systemd[1]: Finished lvm2-activation-early.service. Nov 1 00:42:33.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:33.439826 systemd[1]: Reached target cryptsetup.target. 
Nov 1 00:42:33.450343 systemd[1]: Starting lvm2-activation.service... Nov 1 00:42:33.456903 lvm[1092]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:42:33.489484 systemd[1]: Finished lvm2-activation.service. Nov 1 00:42:33.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:33.498863 systemd[1]: Reached target local-fs-pre.target. Nov 1 00:42:33.507432 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 00:42:33.507500 systemd[1]: Reached target local-fs.target. Nov 1 00:42:33.516420 systemd[1]: Reached target machines.target. Nov 1 00:42:33.527535 systemd[1]: Starting ldconfig.service... Nov 1 00:42:33.536369 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:42:33.536488 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:42:33.539036 systemd[1]: Starting systemd-boot-update.service... Nov 1 00:42:33.549627 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Nov 1 00:42:33.562994 systemd[1]: Starting systemd-machine-id-commit.service... Nov 1 00:42:33.566508 systemd[1]: Starting systemd-sysext.service... Nov 1 00:42:33.567435 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1095 (bootctl) Nov 1 00:42:33.569864 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Nov 1 00:42:33.584914 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Nov 1 00:42:33.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:33.607123 systemd[1]: Unmounting usr-share-oem.mount... Nov 1 00:42:33.617223 systemd[1]: usr-share-oem.mount: Deactivated successfully. Nov 1 00:42:33.617687 systemd[1]: Unmounted usr-share-oem.mount. Nov 1 00:42:33.654217 kernel: loop0: detected capacity change from 0 to 224512 Nov 1 00:42:33.759641 systemd-fsck[1105]: fsck.fat 4.2 (2021-01-31) Nov 1 00:42:33.759641 systemd-fsck[1105]: /dev/sda1: 790 files, 120773/258078 clusters Nov 1 00:42:33.763804 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Nov 1 00:42:33.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:33.779341 systemd[1]: Mounting boot.mount... Nov 1 00:42:33.804530 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 00:42:33.805639 systemd[1]: Finished systemd-machine-id-commit.service. Nov 1 00:42:33.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:33.818917 systemd[1]: Mounted boot.mount. Nov 1 00:42:33.840233 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 00:42:33.848480 systemd[1]: Finished systemd-boot-update.service. Nov 1 00:42:33.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:42:33.871215 kernel: loop1: detected capacity change from 0 to 224512 Nov 1 00:42:33.895078 (sd-sysext)[1117]: Using extensions 'kubernetes'. Nov 1 00:42:33.897255 (sd-sysext)[1117]: Merged extensions into '/usr'. Nov 1 00:42:33.929196 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:42:33.932400 systemd[1]: Mounting usr-share-oem.mount... Nov 1 00:42:33.939789 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:42:33.942806 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:42:33.953506 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:42:33.964145 systemd[1]: Starting modprobe@loop.service... Nov 1 00:42:33.971428 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:42:33.971713 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:42:33.971943 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:42:33.978115 systemd[1]: Mounted usr-share-oem.mount. Nov 1 00:42:33.986056 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:42:33.986409 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:42:33.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:33.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:42:33.996242 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:42:33.996500 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:42:34.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:34.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:34.006235 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:42:34.006562 systemd[1]: Finished modprobe@loop.service. Nov 1 00:42:34.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:34.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:34.016447 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:42:34.016725 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:42:34.019576 systemd[1]: Finished systemd-sysext.service. Nov 1 00:42:34.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:34.030894 systemd[1]: Starting ensure-sysext.service... Nov 1 00:42:34.040729 systemd[1]: Starting systemd-tmpfiles-setup.service... 
Nov 1 00:42:34.054121 systemd[1]: Reloading. Nov 1 00:42:34.066142 systemd-tmpfiles[1131]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Nov 1 00:42:34.074135 systemd-tmpfiles[1131]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 00:42:34.080918 systemd-tmpfiles[1131]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 1 00:42:34.238872 /usr/lib/systemd/system-generators/torcx-generator[1153]: time="2025-11-01T00:42:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:42:34.238924 /usr/lib/systemd/system-generators/torcx-generator[1153]: time="2025-11-01T00:42:34Z" level=info msg="torcx already run" Nov 1 00:42:34.311980 ldconfig[1094]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 00:42:34.412253 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:42:34.412582 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:42:34.453758 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:42:34.513376 systemd-networkd[1062]: eth0: Gained IPv6LL Nov 1 00:42:34.547381 systemd[1]: Finished ldconfig.service. 
Nov 1 00:42:34.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:34.557446 systemd[1]: Finished systemd-tmpfiles-setup.service. Nov 1 00:42:34.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:34.573825 systemd[1]: Starting audit-rules.service... Nov 1 00:42:34.585032 systemd[1]: Starting clean-ca-certificates.service... Nov 1 00:42:34.596444 systemd[1]: Starting oem-gce-enable-oslogin.service... Nov 1 00:42:34.608399 systemd[1]: Starting systemd-journal-catalog-update.service... Nov 1 00:42:34.621136 systemd[1]: Starting systemd-resolved.service... Nov 1 00:42:34.633379 systemd[1]: Starting systemd-timesyncd.service... Nov 1 00:42:34.644384 systemd[1]: Starting systemd-update-utmp.service... Nov 1 00:42:34.659137 systemd[1]: Finished clean-ca-certificates.service. Nov 1 00:42:34.663000 audit[1231]: SYSTEM_BOOT pid=1231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Nov 1 00:42:34.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:34.670312 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Nov 1 00:42:34.670767 systemd[1]: Finished oem-gce-enable-oslogin.service. 
Nov 1 00:42:34.674000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Nov 1 00:42:34.674000 audit[1236]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd0b8259b0 a2=420 a3=0 items=0 ppid=1204 pid=1236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:34.674000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Nov 1 00:42:34.676075 augenrules[1236]: No rules Nov 1 00:42:34.680451 systemd[1]: Finished audit-rules.service. Nov 1 00:42:34.689467 systemd[1]: Finished systemd-journal-catalog-update.service. Nov 1 00:42:34.712720 systemd[1]: Finished systemd-update-utmp.service. Nov 1 00:42:34.745013 systemd[1]: Finished ensure-sysext.service. Nov 1 00:42:34.768416 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:42:34.769021 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:42:34.773502 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:42:34.788627 systemd[1]: Starting modprobe@drm.service... Nov 1 00:42:34.805717 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:42:34.818307 systemd[1]: Starting modprobe@loop.service... Nov 1 00:42:34.828081 systemd[1]: Starting oem-gce-enable-oslogin.service... Nov 1 00:42:34.836529 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:42:34.836660 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:42:34.839242 systemd[1]: Starting systemd-networkd-wait-online.service... 
Nov 1 00:42:34.840771 enable-oslogin[1253]: /etc/pam.d/sshd already exists. Not enabling OS Login Nov 1 00:42:34.850339 systemd[1]: Starting systemd-update-done.service... Nov 1 00:42:34.857385 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:42:34.857478 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:42:34.859413 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:42:34.859768 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:42:34.869011 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:42:34.869356 systemd[1]: Finished modprobe@drm.service. Nov 1 00:42:34.877930 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:42:34.878269 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:42:34.878670 systemd-resolved[1220]: Positive Trust Anchors: Nov 1 00:42:34.879157 systemd-resolved[1220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:42:34.879350 systemd-resolved[1220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 00:42:34.887002 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:42:34.887367 systemd[1]: Finished modprobe@loop.service. Nov 1 00:42:34.895941 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Nov 1 00:42:34.896384 systemd[1]: Finished oem-gce-enable-oslogin.service. 
Nov 1 00:42:34.897807 systemd-resolved[1220]: Defaulting to hostname 'linux'. Nov 1 00:42:34.906150 systemd[1]: Finished systemd-networkd-wait-online.service. Nov 1 00:42:34.916677 systemd[1]: Started systemd-resolved.service. Nov 1 00:42:34.925987 systemd[1]: Finished systemd-update-done.service. Nov 1 00:42:34.934891 systemd[1]: Reached target network.target. Nov 1 00:42:34.944400 systemd[1]: Reached target network-online.target. Nov 1 00:42:34.953417 systemd[1]: Reached target nss-lookup.target. Nov 1 00:42:34.962456 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:42:34.962543 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:42:34.962843 systemd[1]: Started systemd-timesyncd.service. Nov 1 00:42:34.964653 systemd-timesyncd[1225]: Contacted time server 169.254.169.254:123 (169.254.169.254). Nov 1 00:42:34.964750 systemd-timesyncd[1225]: Initial clock synchronization to Sat 2025-11-01 00:42:34.877956 UTC. Nov 1 00:42:34.971787 systemd[1]: Reached target sysinit.target. Nov 1 00:42:34.980526 systemd[1]: Started motdgen.path. Nov 1 00:42:34.987458 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Nov 1 00:42:34.997401 systemd[1]: Started systemd-tmpfiles-clean.timer. Nov 1 00:42:35.006380 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 00:42:35.006451 systemd[1]: Reached target paths.target. Nov 1 00:42:35.013343 systemd[1]: Reached target time-set.target. Nov 1 00:42:35.021675 systemd[1]: Started logrotate.timer. Nov 1 00:42:35.028608 systemd[1]: Started mdadm.timer. Nov 1 00:42:35.035358 systemd[1]: Reached target timers.target. Nov 1 00:42:35.042913 systemd[1]: Listening on dbus.socket. Nov 1 00:42:35.052376 systemd[1]: Starting docker.socket... Nov 1 00:42:35.062324 systemd[1]: Listening on sshd.socket. 
Nov 1 00:42:35.069501 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:42:35.070296 systemd[1]: Listening on docker.socket. Nov 1 00:42:35.077484 systemd[1]: Reached target sockets.target. Nov 1 00:42:35.086398 systemd[1]: Reached target basic.target. Nov 1 00:42:35.093659 systemd[1]: System is tainted: cgroupsv1 Nov 1 00:42:35.093773 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 00:42:35.093817 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 00:42:35.095861 systemd[1]: Starting containerd.service... Nov 1 00:42:35.105812 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Nov 1 00:42:35.116762 systemd[1]: Starting dbus.service... Nov 1 00:42:35.125790 systemd[1]: Starting enable-oem-cloudinit.service... Nov 1 00:42:35.135800 systemd[1]: Starting extend-filesystems.service... Nov 1 00:42:35.143393 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Nov 1 00:42:35.146638 systemd[1]: Starting kubelet.service... Nov 1 00:42:35.150511 jq[1268]: false Nov 1 00:42:35.155827 systemd[1]: Starting motdgen.service... Nov 1 00:42:35.165982 systemd[1]: Starting oem-gce.service... Nov 1 00:42:35.176322 systemd[1]: Starting prepare-helm.service... Nov 1 00:42:35.185924 systemd[1]: Starting ssh-key-proc-cmdline.service... Nov 1 00:42:35.195802 systemd[1]: Starting sshd-keygen.service... Nov 1 00:42:35.207539 systemd[1]: Starting systemd-logind.service... Nov 1 00:42:35.214360 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Nov 1 00:42:35.214519 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Nov 1 00:42:35.217037 systemd[1]: Starting update-engine.service... Nov 1 00:42:35.227037 systemd[1]: Starting update-ssh-keys-after-ignition.service... Nov 1 00:42:35.235204 jq[1290]: true Nov 1 00:42:35.239573 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 00:42:35.240063 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Nov 1 00:42:35.246710 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 00:42:35.253710 systemd[1]: Finished ssh-key-proc-cmdline.service. Nov 1 00:42:35.299371 mkfs.ext4[1302]: mke2fs 1.46.5 (30-Dec-2021) Nov 1 00:42:35.306577 mkfs.ext4[1302]: Discarding device blocks: done Nov 1 00:42:35.306773 mkfs.ext4[1302]: Creating filesystem with 262144 4k blocks and 65536 inodes Nov 1 00:42:35.306773 mkfs.ext4[1302]: Filesystem UUID: 96862e47-448c-4f7d-8b7b-5e7ff8d6d2dd Nov 1 00:42:35.306773 mkfs.ext4[1302]: Superblock backups stored on blocks: Nov 1 00:42:35.306773 mkfs.ext4[1302]: 32768, 98304, 163840, 229376 Nov 1 00:42:35.306773 mkfs.ext4[1302]: Allocating group tables: done Nov 1 00:42:35.307001 mkfs.ext4[1302]: Writing inode tables: done Nov 1 00:42:35.307807 mkfs.ext4[1302]: Creating journal (8192 blocks): done Nov 1 00:42:35.321791 mkfs.ext4[1302]: Writing superblocks and filesystem accounting information: done Nov 1 00:42:35.338522 jq[1300]: true Nov 1 00:42:35.344261 extend-filesystems[1269]: Found loop1 Nov 1 00:42:35.344261 extend-filesystems[1269]: Found sda Nov 1 00:42:35.344261 extend-filesystems[1269]: Found sda1 Nov 1 00:42:35.344261 
extend-filesystems[1269]: Found sda2 Nov 1 00:42:35.344261 extend-filesystems[1269]: Found sda3 Nov 1 00:42:35.344261 extend-filesystems[1269]: Found usr Nov 1 00:42:35.344261 extend-filesystems[1269]: Found sda4 Nov 1 00:42:35.344261 extend-filesystems[1269]: Found sda6 Nov 1 00:42:35.344261 extend-filesystems[1269]: Found sda7 Nov 1 00:42:35.344261 extend-filesystems[1269]: Found sda9 Nov 1 00:42:35.344261 extend-filesystems[1269]: Checking size of /dev/sda9 Nov 1 00:42:35.392856 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 00:42:35.442693 extend-filesystems[1269]: Resized partition /dev/sda9 Nov 1 00:42:35.393660 systemd[1]: Finished motdgen.service. Nov 1 00:42:35.451887 umount[1322]: umount: /var/lib/flatcar-oem-gce.img: not mounted. Nov 1 00:42:35.465397 extend-filesystems[1328]: resize2fs 1.46.5 (30-Dec-2021) Nov 1 00:42:35.491015 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 3587067 blocks Nov 1 00:42:35.491236 kernel: loop2: detected capacity change from 0 to 2097152 Nov 1 00:42:35.493688 update_engine[1287]: I1101 00:42:35.493609 1287 main.cc:92] Flatcar Update Engine starting Nov 1 00:42:35.507764 tar[1298]: linux-amd64/LICENSE Nov 1 00:42:35.509383 tar[1298]: linux-amd64/helm Nov 1 00:42:35.510103 dbus-daemon[1267]: [system] SELinux support is enabled Nov 1 00:42:35.510523 systemd[1]: Started dbus.service. Nov 1 00:42:35.522043 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 1 00:42:35.522116 systemd[1]: Reached target system-config.target. 
Nov 1 00:42:35.527935 dbus-daemon[1267]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1062 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 1 00:42:35.531107 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 00:42:35.531200 systemd[1]: Reached target user-config.target. Nov 1 00:42:35.538463 update_engine[1287]: I1101 00:42:35.538293 1287 update_check_scheduler.cc:74] Next update check in 2m6s Nov 1 00:42:35.544410 systemd[1]: Started update-engine.service. Nov 1 00:42:35.544870 dbus-daemon[1267]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 1 00:42:35.560071 systemd[1]: Started locksmithd.service. Nov 1 00:42:35.576690 systemd[1]: Starting systemd-hostnamed.service... Nov 1 00:42:35.640210 kernel: EXT4-fs (loop2): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Nov 1 00:42:35.648204 kernel: EXT4-fs (sda9): resized filesystem to 3587067 Nov 1 00:42:35.714523 extend-filesystems[1328]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Nov 1 00:42:35.714523 extend-filesystems[1328]: old_desc_blocks = 1, new_desc_blocks = 2 Nov 1 00:42:35.714523 extend-filesystems[1328]: The filesystem on /dev/sda9 is now 3587067 (4k) blocks long. Nov 1 00:42:35.744540 extend-filesystems[1269]: Resized filesystem in /dev/sda9 Nov 1 00:42:35.733551 systemd[1]: Finished update-ssh-keys-after-ignition.service. Nov 1 00:42:35.762889 bash[1348]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:42:35.755186 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 00:42:35.755685 systemd[1]: Finished extend-filesystems.service. 
Nov 1 00:42:35.776962 env[1301]: time="2025-11-01T00:42:35.776866996Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Nov 1 00:42:35.822714 systemd-logind[1286]: Watching system buttons on /dev/input/event1 (Power Button) Nov 1 00:42:35.822766 systemd-logind[1286]: Watching system buttons on /dev/input/event2 (Sleep Button) Nov 1 00:42:35.822801 systemd-logind[1286]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 1 00:42:35.827508 systemd-logind[1286]: New seat seat0. Nov 1 00:42:35.833823 systemd[1]: Started systemd-logind.service. Nov 1 00:42:35.933136 coreos-metadata[1266]: Nov 01 00:42:35.932 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Nov 1 00:42:35.937518 coreos-metadata[1266]: Nov 01 00:42:35.937 INFO Fetch failed with 404: resource not found Nov 1 00:42:35.937518 coreos-metadata[1266]: Nov 01 00:42:35.937 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Nov 1 00:42:35.938247 coreos-metadata[1266]: Nov 01 00:42:35.938 INFO Fetch successful Nov 1 00:42:35.938247 coreos-metadata[1266]: Nov 01 00:42:35.938 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Nov 1 00:42:35.938784 coreos-metadata[1266]: Nov 01 00:42:35.938 INFO Fetch failed with 404: resource not found Nov 1 00:42:35.938784 coreos-metadata[1266]: Nov 01 00:42:35.938 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Nov 1 00:42:35.939294 coreos-metadata[1266]: Nov 01 00:42:35.939 INFO Fetch failed with 404: resource not found Nov 1 00:42:35.939294 coreos-metadata[1266]: Nov 01 00:42:35.939 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Nov 1 00:42:35.940540 coreos-metadata[1266]: Nov 01 00:42:35.940 INFO Fetch successful Nov 1 00:42:35.942871 unknown[1266]: wrote 
ssh authorized keys file for user: core Nov 1 00:42:35.974502 dbus-daemon[1267]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 1 00:42:35.974784 systemd[1]: Started systemd-hostnamed.service. Nov 1 00:42:35.975754 dbus-daemon[1267]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1345 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 1 00:42:35.989551 systemd[1]: Starting polkit.service... Nov 1 00:42:35.994677 update-ssh-keys[1358]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:42:35.997150 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Nov 1 00:42:36.067952 env[1301]: time="2025-11-01T00:42:36.067773061Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 1 00:42:36.068186 env[1301]: time="2025-11-01T00:42:36.068107301Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:42:36.075948 env[1301]: time="2025-11-01T00:42:36.075879071Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:42:36.075948 env[1301]: time="2025-11-01T00:42:36.075944942Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:42:36.077673 env[1301]: time="2025-11-01T00:42:36.077615729Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:42:36.088406 env[1301]: time="2025-11-01T00:42:36.088278706Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 1 00:42:36.088600 env[1301]: time="2025-11-01T00:42:36.088471073Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Nov 1 00:42:36.088600 env[1301]: time="2025-11-01T00:42:36.088512739Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 1 00:42:36.088833 env[1301]: time="2025-11-01T00:42:36.088785080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:42:36.089425 env[1301]: time="2025-11-01T00:42:36.089383134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:42:36.090914 env[1301]: time="2025-11-01T00:42:36.089953493Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:42:36.090914 env[1301]: time="2025-11-01T00:42:36.090016032Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Nov 1 00:42:36.090914 env[1301]: time="2025-11-01T00:42:36.090150592Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Nov 1 00:42:36.090914 env[1301]: time="2025-11-01T00:42:36.090202730Z" level=info msg="metadata content store policy set" policy=shared Nov 1 00:42:36.098602 env[1301]: time="2025-11-01T00:42:36.098493762Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 1 00:42:36.098771 env[1301]: time="2025-11-01T00:42:36.098614001Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 1 00:42:36.098771 env[1301]: time="2025-11-01T00:42:36.098641153Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 1 00:42:36.098771 env[1301]: time="2025-11-01T00:42:36.098727522Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 1 00:42:36.098771 env[1301]: time="2025-11-01T00:42:36.098753134Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 1 00:42:36.098965 env[1301]: time="2025-11-01T00:42:36.098844097Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 1 00:42:36.098965 env[1301]: time="2025-11-01T00:42:36.098872695Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 1 00:42:36.098965 env[1301]: time="2025-11-01T00:42:36.098902623Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 00:42:36.098965 env[1301]: time="2025-11-01T00:42:36.098928501Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Nov 1 00:42:36.098965 env[1301]: time="2025-11-01T00:42:36.098953963Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 00:42:36.099230 env[1301]: time="2025-11-01T00:42:36.098977283Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 00:42:36.099230 env[1301]: time="2025-11-01T00:42:36.099004833Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 00:42:36.099230 env[1301]: time="2025-11-01T00:42:36.099210515Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 1 00:42:36.100605 env[1301]: time="2025-11-01T00:42:36.099392708Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 00:42:36.100605 env[1301]: time="2025-11-01T00:42:36.100025361Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 00:42:36.100605 env[1301]: time="2025-11-01T00:42:36.100073590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 00:42:36.100605 env[1301]: time="2025-11-01T00:42:36.100101266Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 00:42:36.100605 env[1301]: time="2025-11-01T00:42:36.100210924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 00:42:36.100605 env[1301]: time="2025-11-01T00:42:36.100239590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 00:42:36.100605 env[1301]: time="2025-11-01T00:42:36.100266183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Nov 1 00:42:36.100605 env[1301]: time="2025-11-01T00:42:36.100290372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 00:42:36.100605 env[1301]: time="2025-11-01T00:42:36.100313219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 00:42:36.100605 env[1301]: time="2025-11-01T00:42:36.100336028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 00:42:36.100605 env[1301]: time="2025-11-01T00:42:36.100358320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 00:42:36.100605 env[1301]: time="2025-11-01T00:42:36.100381064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 1 00:42:36.100605 env[1301]: time="2025-11-01T00:42:36.100406458Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 00:42:36.101274 env[1301]: time="2025-11-01T00:42:36.100615907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 00:42:36.101274 env[1301]: time="2025-11-01T00:42:36.100641030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 1 00:42:36.101274 env[1301]: time="2025-11-01T00:42:36.100661667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 1 00:42:36.101274 env[1301]: time="2025-11-01T00:42:36.100682577Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 00:42:36.101274 env[1301]: time="2025-11-01T00:42:36.100712596Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Nov 1 00:42:36.101274 env[1301]: time="2025-11-01T00:42:36.100731650Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 00:42:36.101274 env[1301]: time="2025-11-01T00:42:36.100764214Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Nov 1 00:42:36.101274 env[1301]: time="2025-11-01T00:42:36.100820761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 1 00:42:36.101628 env[1301]: time="2025-11-01T00:42:36.101209615Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 00:42:36.101628 env[1301]: time="2025-11-01T00:42:36.101311480Z" level=info msg="Connect containerd service" Nov 1 00:42:36.101628 env[1301]: time="2025-11-01T00:42:36.101387503Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 00:42:36.105296 env[1301]: time="2025-11-01T00:42:36.105251831Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:42:36.110659 env[1301]: time="2025-11-01T00:42:36.110579577Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 00:42:36.110811 env[1301]: time="2025-11-01T00:42:36.110739112Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 00:42:36.111050 systemd[1]: Started containerd.service. 
Nov 1 00:42:36.123232 env[1301]: time="2025-11-01T00:42:36.121021212Z" level=info msg="Start subscribing containerd event" Nov 1 00:42:36.123232 env[1301]: time="2025-11-01T00:42:36.121143049Z" level=info msg="Start recovering state" Nov 1 00:42:36.123232 env[1301]: time="2025-11-01T00:42:36.121286417Z" level=info msg="Start event monitor" Nov 1 00:42:36.123232 env[1301]: time="2025-11-01T00:42:36.121313210Z" level=info msg="Start snapshots syncer" Nov 1 00:42:36.123232 env[1301]: time="2025-11-01T00:42:36.121338134Z" level=info msg="Start cni network conf syncer for default" Nov 1 00:42:36.123232 env[1301]: time="2025-11-01T00:42:36.121353991Z" level=info msg="Start streaming server" Nov 1 00:42:36.123232 env[1301]: time="2025-11-01T00:42:36.121913721Z" level=info msg="containerd successfully booted in 0.419350s" Nov 1 00:42:36.134372 polkitd[1360]: Started polkitd version 121 Nov 1 00:42:36.160335 polkitd[1360]: Loading rules from directory /etc/polkit-1/rules.d Nov 1 00:42:36.160464 polkitd[1360]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 1 00:42:36.170476 polkitd[1360]: Finished loading, compiling and executing 2 rules Nov 1 00:42:36.172020 dbus-daemon[1267]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 1 00:42:36.172280 systemd[1]: Started polkit.service. Nov 1 00:42:36.173015 polkitd[1360]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 1 00:42:36.205835 systemd-hostnamed[1345]: Hostname set to (transient) Nov 1 00:42:36.209266 systemd-resolved[1220]: System hostname changed to 'ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762'. Nov 1 00:42:37.705971 tar[1298]: linux-amd64/README.md Nov 1 00:42:37.734854 systemd[1]: Finished prepare-helm.service. Nov 1 00:42:38.115646 systemd[1]: Started kubelet.service. 
Nov 1 00:42:38.138700 locksmithd[1341]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 00:42:39.138362 sshd_keygen[1303]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 00:42:39.224617 systemd[1]: Finished sshd-keygen.service. Nov 1 00:42:39.234718 systemd[1]: Starting issuegen.service... Nov 1 00:42:39.251145 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 00:42:39.251648 systemd[1]: Finished issuegen.service. Nov 1 00:42:39.263906 systemd[1]: Starting systemd-user-sessions.service... Nov 1 00:42:39.280795 systemd[1]: Finished systemd-user-sessions.service. Nov 1 00:42:39.292777 systemd[1]: Started getty@tty1.service. Nov 1 00:42:39.303664 systemd[1]: Started serial-getty@ttyS0.service. Nov 1 00:42:39.314195 systemd[1]: Reached target getty.target. Nov 1 00:42:39.496327 kubelet[1392]: E1101 00:42:39.496179 1392 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:42:39.500108 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:42:39.500511 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:42:41.864498 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully. Nov 1 00:42:43.839590 systemd[1]: Created slice system-sshd.slice. Nov 1 00:42:43.851260 systemd[1]: Started sshd@0-10.128.0.16:22-139.178.68.195:55160.service. Nov 1 00:42:43.995326 kernel: loop2: detected capacity change from 0 to 2097152 Nov 1 00:42:44.009752 systemd-nspawn[1419]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img. Nov 1 00:42:44.009752 systemd-nspawn[1419]: Press ^] three times within 1s to kill container. 
Nov 1 00:42:44.023236 kernel: EXT4-fs (loop2): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Nov 1 00:42:44.107682 systemd[1]: Started oem-gce.service.
Nov 1 00:42:44.115804 systemd[1]: Reached target multi-user.target.
Nov 1 00:42:44.127589 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Nov 1 00:42:44.146375 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 1 00:42:44.146800 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Nov 1 00:42:44.155149 systemd-nspawn[1419]: + '[' -e /etc/default/instance_configs.cfg.template ']'
Nov 1 00:42:44.155495 systemd-nspawn[1419]: + echo -e '[InstanceSetup]\nset_host_keys = false'
Nov 1 00:42:44.155740 systemd-nspawn[1419]: + /usr/bin/google_instance_setup
Nov 1 00:42:44.160262 systemd[1]: Startup finished in 10.254s (kernel) + 17.174s (userspace) = 27.429s.
Nov 1 00:42:44.202048 sshd[1417]: Accepted publickey for core from 139.178.68.195 port 55160 ssh2: RSA SHA256:GSqF/4F3rRKdKeqeDHvdnEOSnHTK3+r0cz3SPwoprYw
Nov 1 00:42:44.205691 sshd[1417]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:42:44.227268 systemd[1]: Created slice user-500.slice.
Nov 1 00:42:44.229499 systemd[1]: Starting user-runtime-dir@500.service...
Nov 1 00:42:44.235494 systemd-logind[1286]: New session 1 of user core.
Nov 1 00:42:44.252490 systemd[1]: Finished user-runtime-dir@500.service.
Nov 1 00:42:44.257531 systemd[1]: Starting user@500.service...
Nov 1 00:42:44.282874 (systemd)[1431]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:42:44.430769 systemd[1431]: Queued start job for default target default.target.
Nov 1 00:42:44.432344 systemd[1431]: Reached target paths.target.
Nov 1 00:42:44.432395 systemd[1431]: Reached target sockets.target.
Nov 1 00:42:44.432419 systemd[1431]: Reached target timers.target.
Nov 1 00:42:44.432441 systemd[1431]: Reached target basic.target.
Nov 1 00:42:44.432698 systemd[1]: Started user@500.service.
Nov 1 00:42:44.433282 systemd[1431]: Reached target default.target.
Nov 1 00:42:44.433362 systemd[1431]: Startup finished in 137ms.
Nov 1 00:42:44.434541 systemd[1]: Started session-1.scope.
Nov 1 00:42:44.661842 systemd[1]: Started sshd@1-10.128.0.16:22-139.178.68.195:55176.service.
Nov 1 00:42:44.969472 instance-setup[1427]: INFO Running google_set_multiqueue.
Nov 1 00:42:44.971515 sshd[1440]: Accepted publickey for core from 139.178.68.195 port 55176 ssh2: RSA SHA256:GSqF/4F3rRKdKeqeDHvdnEOSnHTK3+r0cz3SPwoprYw
Nov 1 00:42:44.974141 sshd[1440]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:42:44.982095 systemd-logind[1286]: New session 2 of user core.
Nov 1 00:42:44.983004 systemd[1]: Started session-2.scope.
Nov 1 00:42:45.006274 instance-setup[1427]: INFO Set channels for eth0 to 2.
Nov 1 00:42:45.010363 instance-setup[1427]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1.
Nov 1 00:42:45.012126 instance-setup[1427]: INFO /proc/irq/31/smp_affinity_list: real affinity 0
Nov 1 00:42:45.012675 instance-setup[1427]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1.
Nov 1 00:42:45.014683 instance-setup[1427]: INFO /proc/irq/32/smp_affinity_list: real affinity 0
Nov 1 00:42:45.015223 instance-setup[1427]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1.
Nov 1 00:42:45.016715 instance-setup[1427]: INFO /proc/irq/33/smp_affinity_list: real affinity 1
Nov 1 00:42:45.017203 instance-setup[1427]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1.
Nov 1 00:42:45.018653 instance-setup[1427]: INFO /proc/irq/34/smp_affinity_list: real affinity 1
Nov 1 00:42:45.031509 instance-setup[1427]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus
Nov 1 00:42:45.031974 instance-setup[1427]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus
Nov 1 00:42:45.085384 systemd-nspawn[1419]: + /usr/bin/google_metadata_script_runner --script-type startup
Nov 1 00:42:45.200657 sshd[1440]: pam_unix(sshd:session): session closed for user core
Nov 1 00:42:45.209270 systemd-logind[1286]: Session 2 logged out. Waiting for processes to exit.
Nov 1 00:42:45.209648 systemd[1]: sshd@1-10.128.0.16:22-139.178.68.195:55176.service: Deactivated successfully.
Nov 1 00:42:45.211214 systemd[1]: session-2.scope: Deactivated successfully.
Nov 1 00:42:45.215494 systemd-logind[1286]: Removed session 2.
Nov 1 00:42:45.244267 systemd[1]: Started sshd@2-10.128.0.16:22-139.178.68.195:55190.service.
Nov 1 00:42:45.473193 startup-script[1474]: INFO Starting startup scripts.
Nov 1 00:42:45.486667 startup-script[1474]: INFO No startup scripts found in metadata.
Nov 1 00:42:45.486836 startup-script[1474]: INFO Finished running startup scripts.
Nov 1 00:42:45.534785 systemd-nspawn[1419]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
Nov 1 00:42:45.534785 systemd-nspawn[1419]: + daemon_pids=()
Nov 1 00:42:45.534785 systemd-nspawn[1419]: + for d in accounts clock_skew network
Nov 1 00:42:45.534785 systemd-nspawn[1419]: + daemon_pids+=($!)
Nov 1 00:42:45.534785 systemd-nspawn[1419]: + /usr/bin/google_accounts_daemon
Nov 1 00:42:45.534785 systemd-nspawn[1419]: + for d in accounts clock_skew network
Nov 1 00:42:45.534785 systemd-nspawn[1419]: + daemon_pids+=($!)
Nov 1 00:42:45.534785 systemd-nspawn[1419]: + for d in accounts clock_skew network
Nov 1 00:42:45.534785 systemd-nspawn[1419]: + daemon_pids+=($!)
Nov 1 00:42:45.534785 systemd-nspawn[1419]: + NOTIFY_SOCKET=/run/systemd/notify
Nov 1 00:42:45.535861 systemd-nspawn[1419]: + /usr/bin/systemd-notify --ready
Nov 1 00:42:45.536662 systemd-nspawn[1419]: + /usr/bin/google_clock_skew_daemon
Nov 1 00:42:45.536856 systemd-nspawn[1419]: + /usr/bin/google_network_daemon
Nov 1 00:42:45.558554 sshd[1478]: Accepted publickey for core from 139.178.68.195 port 55190 ssh2: RSA SHA256:GSqF/4F3rRKdKeqeDHvdnEOSnHTK3+r0cz3SPwoprYw
Nov 1 00:42:45.560478 sshd[1478]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:42:45.570165 systemd[1]: Started session-3.scope.
Nov 1 00:42:45.571095 systemd-logind[1286]: New session 3 of user core.
Nov 1 00:42:45.619618 systemd-nspawn[1419]: + wait -n 36 37 38
Nov 1 00:42:45.771495 sshd[1478]: pam_unix(sshd:session): session closed for user core
Nov 1 00:42:45.776550 systemd[1]: sshd@2-10.128.0.16:22-139.178.68.195:55190.service: Deactivated successfully.
Nov 1 00:42:45.778012 systemd[1]: session-3.scope: Deactivated successfully.
Nov 1 00:42:45.779572 systemd-logind[1286]: Session 3 logged out. Waiting for processes to exit.
Nov 1 00:42:45.788510 systemd-logind[1286]: Removed session 3.
Nov 1 00:42:45.814559 systemd[1]: Started sshd@3-10.128.0.16:22-139.178.68.195:55194.service.
Nov 1 00:42:46.141487 sshd[1491]: Accepted publickey for core from 139.178.68.195 port 55194 ssh2: RSA SHA256:GSqF/4F3rRKdKeqeDHvdnEOSnHTK3+r0cz3SPwoprYw
Nov 1 00:42:46.143292 sshd[1491]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:42:46.152425 systemd[1]: Started session-4.scope.
Nov 1 00:42:46.152794 systemd-logind[1286]: New session 4 of user core.
Nov 1 00:42:46.221397 google-networking[1484]: INFO Starting Google Networking daemon.
Nov 1 00:42:46.367480 sshd[1491]: pam_unix(sshd:session): session closed for user core
Nov 1 00:42:46.374956 systemd[1]: sshd@3-10.128.0.16:22-139.178.68.195:55194.service: Deactivated successfully.
Nov 1 00:42:46.376447 systemd[1]: session-4.scope: Deactivated successfully.
Nov 1 00:42:46.378009 systemd-logind[1286]: Session 4 logged out. Waiting for processes to exit.
Nov 1 00:42:46.383426 systemd-logind[1286]: Removed session 4.
Nov 1 00:42:46.414431 systemd[1]: Started sshd@4-10.128.0.16:22-139.178.68.195:55210.service.
Nov 1 00:42:46.442592 groupadd[1505]: group added to /etc/group: name=google-sudoers, GID=1000
Nov 1 00:42:46.446116 groupadd[1505]: group added to /etc/gshadow: name=google-sudoers
Nov 1 00:42:46.450758 groupadd[1505]: new group: name=google-sudoers, GID=1000
Nov 1 00:42:46.456974 google-clock-skew[1483]: INFO Starting Google Clock Skew daemon.
Nov 1 00:42:46.469785 google-accounts[1482]: INFO Starting Google Accounts daemon.
Nov 1 00:42:46.471199 google-clock-skew[1483]: INFO Clock drift token has changed: 0.
Nov 1 00:42:46.476394 systemd-nspawn[1419]: hwclock: Cannot access the Hardware Clock via any known method.
Nov 1 00:42:46.477161 systemd-nspawn[1419]: hwclock: Use the --verbose option to see the details of our search for an access method.
Nov 1 00:42:46.478087 google-clock-skew[1483]: WARNING Failed to sync system time with hardware clock.
Nov 1 00:42:46.502166 google-accounts[1482]: WARNING OS Login not installed.
Nov 1 00:42:46.503317 google-accounts[1482]: INFO Creating a new user account for 0.
Nov 1 00:42:46.509490 systemd-nspawn[1419]: useradd: invalid user name '0': use --badname to ignore
Nov 1 00:42:46.510463 google-accounts[1482]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3..
Nov 1 00:42:46.723008 sshd[1506]: Accepted publickey for core from 139.178.68.195 port 55210 ssh2: RSA SHA256:GSqF/4F3rRKdKeqeDHvdnEOSnHTK3+r0cz3SPwoprYw
Nov 1 00:42:46.725751 sshd[1506]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:42:46.733202 systemd[1]: Started session-5.scope.
Nov 1 00:42:46.733560 systemd-logind[1286]: New session 5 of user core.
Nov 1 00:42:46.926235 sudo[1520]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 1 00:42:46.926720 sudo[1520]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Nov 1 00:42:46.936467 dbus-daemon[1267]: \xd0\u000dL$JV: received setenforce notice (enforcing=-1031666528)
Nov 1 00:42:46.939049 sudo[1520]: pam_unix(sudo:session): session closed for user root
Nov 1 00:42:46.985057 sshd[1506]: pam_unix(sshd:session): session closed for user core
Nov 1 00:42:46.990479 systemd[1]: sshd@4-10.128.0.16:22-139.178.68.195:55210.service: Deactivated successfully.
Nov 1 00:42:46.992692 systemd[1]: session-5.scope: Deactivated successfully.
Nov 1 00:42:46.993412 systemd-logind[1286]: Session 5 logged out. Waiting for processes to exit.
Nov 1 00:42:46.994931 systemd-logind[1286]: Removed session 5.
Nov 1 00:42:47.027621 systemd[1]: Started sshd@5-10.128.0.16:22-139.178.68.195:55214.service.
Nov 1 00:42:47.315296 sshd[1524]: Accepted publickey for core from 139.178.68.195 port 55214 ssh2: RSA SHA256:GSqF/4F3rRKdKeqeDHvdnEOSnHTK3+r0cz3SPwoprYw
Nov 1 00:42:47.316948 sshd[1524]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:42:47.324037 systemd[1]: Started session-6.scope.
Nov 1 00:42:47.324425 systemd-logind[1286]: New session 6 of user core.
Nov 1 00:42:47.492057 sudo[1529]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 1 00:42:47.492499 sudo[1529]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Nov 1 00:42:47.496820 sudo[1529]: pam_unix(sudo:session): session closed for user root
Nov 1 00:42:47.508794 sudo[1528]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Nov 1 00:42:47.509229 sudo[1528]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Nov 1 00:42:47.522111 systemd[1]: Stopping audit-rules.service...
Nov 1 00:42:47.523000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Nov 1 00:42:47.529830 kernel: kauditd_printk_skb: 129 callbacks suppressed
Nov 1 00:42:47.529912 kernel: audit: type=1305 audit(1761957767.523:139): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Nov 1 00:42:47.529950 auditctl[1532]: No rules
Nov 1 00:42:47.530922 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 1 00:42:47.531299 systemd[1]: Stopped audit-rules.service.
Nov 1 00:42:47.534615 systemd[1]: Starting audit-rules.service...
Nov 1 00:42:47.523000 audit[1532]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff90897920 a2=420 a3=0 items=0 ppid=1 pid=1532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:47.578768 kernel: audit: type=1300 audit(1761957767.523:139): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff90897920 a2=420 a3=0 items=0 ppid=1 pid=1532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:47.523000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
Nov 1 00:42:47.581769 systemd[1]: Finished audit-rules.service.
Nov 1 00:42:47.582626 augenrules[1550]: No rules
Nov 1 00:42:47.588194 kernel: audit: type=1327 audit(1761957767.523:139): proctitle=2F7362696E2F617564697463746C002D44
Nov 1 00:42:47.589145 sudo[1528]: pam_unix(sudo:session): session closed for user root
Nov 1 00:42:47.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:47.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:47.614209 kernel: audit: type=1131 audit(1761957767.530:140): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:47.614279 kernel: audit: type=1130 audit(1761957767.578:141): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:47.637249 kernel: audit: type=1106 audit(1761957767.589:142): pid=1528 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:47.589000 audit[1528]: USER_END pid=1528 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:47.637834 sshd[1524]: pam_unix(sshd:session): session closed for user core
Nov 1 00:42:47.644185 systemd[1]: sshd@5-10.128.0.16:22-139.178.68.195:55214.service: Deactivated successfully.
Nov 1 00:42:47.646492 systemd[1]: session-6.scope: Deactivated successfully.
Nov 1 00:42:47.647597 systemd-logind[1286]: Session 6 logged out. Waiting for processes to exit.
Nov 1 00:42:47.649227 systemd-logind[1286]: Removed session 6.
Nov 1 00:42:47.589000 audit[1528]: CRED_DISP pid=1528 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:47.683698 kernel: audit: type=1104 audit(1761957767.589:143): pid=1528 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:47.683861 kernel: audit: type=1106 audit(1761957767.639:144): pid=1524 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:42:47.639000 audit[1524]: USER_END pid=1524 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:42:47.690308 systemd[1]: Started sshd@6-10.128.0.16:22-139.178.68.195:55228.service.
Nov 1 00:42:47.639000 audit[1524]: CRED_DISP pid=1524 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:42:47.740041 kernel: audit: type=1104 audit(1761957767.639:145): pid=1524 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:42:47.740598 kernel: audit: type=1131 audit(1761957767.643:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.128.0.16:22-139.178.68.195:55214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:47.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.128.0.16:22-139.178.68.195:55214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:47.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.128.0.16:22-139.178.68.195:55228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:48.001000 audit[1557]: USER_ACCT pid=1557 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:42:48.004020 sshd[1557]: Accepted publickey for core from 139.178.68.195 port 55228 ssh2: RSA SHA256:GSqF/4F3rRKdKeqeDHvdnEOSnHTK3+r0cz3SPwoprYw
Nov 1 00:42:48.003000 audit[1557]: CRED_ACQ pid=1557 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:42:48.003000 audit[1557]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffde8734930 a2=3 a3=0 items=0 ppid=1 pid=1557 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:48.003000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Nov 1 00:42:48.005519 sshd[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:42:48.013335 systemd-logind[1286]: New session 7 of user core.
Nov 1 00:42:48.013536 systemd[1]: Started session-7.scope.
Nov 1 00:42:48.023000 audit[1557]: USER_START pid=1557 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:42:48.027000 audit[1560]: CRED_ACQ pid=1560 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:42:48.179000 audit[1561]: USER_ACCT pid=1561 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:48.181314 sudo[1561]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 1 00:42:48.180000 audit[1561]: CRED_REFR pid=1561 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:48.181807 sudo[1561]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Nov 1 00:42:48.183000 audit[1561]: USER_START pid=1561 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:48.221577 systemd[1]: Starting docker.service...
Nov 1 00:42:48.278738 env[1571]: time="2025-11-01T00:42:48.277625032Z" level=info msg="Starting up"
Nov 1 00:42:48.280678 env[1571]: time="2025-11-01T00:42:48.280636424Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 1 00:42:48.280678 env[1571]: time="2025-11-01T00:42:48.280675097Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 1 00:42:48.280859 env[1571]: time="2025-11-01T00:42:48.280707716Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Nov 1 00:42:48.280859 env[1571]: time="2025-11-01T00:42:48.280725113Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 1 00:42:48.283384 env[1571]: time="2025-11-01T00:42:48.283355674Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 1 00:42:48.283526 env[1571]: time="2025-11-01T00:42:48.283509351Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 1 00:42:48.283611 env[1571]: time="2025-11-01T00:42:48.283592728Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Nov 1 00:42:48.283689 env[1571]: time="2025-11-01T00:42:48.283661056Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 1 00:42:48.296261 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1045739292-merged.mount: Deactivated successfully.
Nov 1 00:42:48.883819 env[1571]: time="2025-11-01T00:42:48.883744846Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Nov 1 00:42:48.883819 env[1571]: time="2025-11-01T00:42:48.883785742Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Nov 1 00:42:48.884518 env[1571]: time="2025-11-01T00:42:48.884255052Z" level=info msg="Loading containers: start."
Nov 1 00:42:48.976000 audit[1601]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1601 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:42:48.976000 audit[1601]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff40affef0 a2=0 a3=7fff40affedc items=0 ppid=1571 pid=1601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:48.976000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Nov 1 00:42:48.980000 audit[1603]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1603 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:42:48.980000 audit[1603]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdf054c790 a2=0 a3=7ffdf054c77c items=0 ppid=1571 pid=1603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:48.980000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Nov 1 00:42:48.984000 audit[1605]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1605 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:42:48.984000 audit[1605]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffebc5f7ba0 a2=0 a3=7ffebc5f7b8c items=0 ppid=1571 pid=1605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:48.984000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Nov 1 00:42:48.987000 audit[1607]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1607 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:42:48.987000 audit[1607]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe04bdc2d0 a2=0 a3=7ffe04bdc2bc items=0 ppid=1571 pid=1607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:48.987000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Nov 1 00:42:48.991000 audit[1609]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1609 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:42:48.991000 audit[1609]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd1ce5a2b0 a2=0 a3=7ffd1ce5a29c items=0 ppid=1571 pid=1609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:48.991000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E
Nov 1 00:42:49.016000 audit[1614]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1614 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:42:49.016000 audit[1614]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffaa5fc320 a2=0 a3=7fffaa5fc30c items=0 ppid=1571 pid=1614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:49.016000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E
Nov 1 00:42:49.027000 audit[1616]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1616 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:42:49.027000 audit[1616]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffee96cb7f0 a2=0 a3=7ffee96cb7dc items=0 ppid=1571 pid=1616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:49.027000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Nov 1 00:42:49.031000 audit[1618]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1618 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:42:49.031000 audit[1618]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc24bb0590 a2=0 a3=7ffc24bb057c items=0 ppid=1571 pid=1618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:49.031000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Nov 1 00:42:49.034000 audit[1620]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1620 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:42:49.034000 audit[1620]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7fff722afc70 a2=0 a3=7fff722afc5c items=0 ppid=1571 pid=1620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:49.034000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Nov 1 00:42:49.049000 audit[1624]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1624 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:42:49.049000 audit[1624]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffcb4091ca0 a2=0 a3=7ffcb4091c8c items=0 ppid=1571 pid=1624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:49.049000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Nov 1 00:42:49.055000 audit[1625]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1625 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:42:49.055000 audit[1625]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd11e20b20 a2=0 a3=7ffd11e20b0c items=0 ppid=1571 pid=1625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:49.055000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Nov 1 00:42:49.075227 kernel: Initializing XFRM netlink socket
Nov 1 00:42:49.124683 env[1571]: time="2025-11-01T00:42:49.124606007Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Nov 1 00:42:49.158000 audit[1633]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1633 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:42:49.158000 audit[1633]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffde51df130 a2=0 a3=7ffde51df11c items=0 ppid=1571 pid=1633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:49.158000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
Nov 1 00:42:49.171000 audit[1636]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1636 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:42:49.171000 audit[1636]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffcc3f88cc0 a2=0 a3=7ffcc3f88cac items=0 ppid=1571 pid=1636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:49.171000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
Nov 1 00:42:49.175000 audit[1639]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1639 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:42:49.175000 audit[1639]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd8cb50a40 a2=0 a3=7ffd8cb50a2c items=0 ppid=1571 pid=1639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:49.175000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054
Nov 1 00:42:49.178000 audit[1641]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1641 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:42:49.178000 audit[1641]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe8c7d7a50 a2=0 a3=7ffe8c7d7a3c items=0 ppid=1571 pid=1641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:49.178000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054
Nov 1 00:42:49.182000 audit[1643]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1643 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:42:49.182000 audit[1643]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7fffaa84ee10 a2=0 a3=7fffaa84edfc items=0 ppid=1571 pid=1643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:49.182000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Nov 1 00:42:49.185000 audit[1645]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1645 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:42:49.185000 audit[1645]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7fffc7a48af0 a2=0 a3=7fffc7a48adc items=0 ppid=1571 pid=1645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:49.185000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
Nov 1 00:42:49.189000 audit[1647]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1647 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:42:49.189000 audit[1647]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffc29a863f0 a2=0 a3=7ffc29a863dc items=0 ppid=1571 pid=1647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:49.189000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552
Nov 1 00:42:49.202000 audit[1650]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1650 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:42:49.202000 audit[1650]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffe1d147050 a2=0 a3=7ffe1d14703c items=0 ppid=1571 pid=1650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:49.202000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054
Nov 1 00:42:49.206000 audit[1652]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1652 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:42:49.206000 audit[1652]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7fff6ec32d20 a2=0 a3=7fff6ec32d0c items=0 ppid=1571 pid=1652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:49.206000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Nov 1 00:42:49.210000 audit[1654]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1654 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:42:49.210000 audit[1654]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffeeded4560 a2=0 a3=7ffeeded454c items=0 ppid=1571 pid=1654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:49.210000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32
Nov 1 00:42:49.214000 audit[1656]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1656 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:42:49.214000 audit[1656]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe81f00750 a2=0 a3=7ffe81f0073c items=0 ppid=1571 pid=1656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:49.214000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50
Nov 1 00:42:49.216338 systemd-networkd[1062]: docker0: Link UP
Nov 1 00:42:49.229000 audit[1660]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1660 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:42:49.229000 audit[1660]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff338fa5f0 a2=0 a3=7fff338fa5dc items=0 ppid=1571 pid=1660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:49.229000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Nov 1 00:42:49.235000 audit[1661]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1661 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:42:49.235000 audit[1661]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe2204fbe0 a2=0 a3=7ffe2204fbcc items=0 ppid=1571 pid=1661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:49.235000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Nov 1 00:42:49.237676 env[1571]: time="2025-11-01T00:42:49.237606565Z" level=info msg="Loading containers: done."
Nov 1 00:42:49.262792 env[1571]: time="2025-11-01T00:42:49.262710757Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 00:42:49.263125 env[1571]: time="2025-11-01T00:42:49.263088169Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Nov 1 00:42:49.263338 env[1571]: time="2025-11-01T00:42:49.263294044Z" level=info msg="Daemon has completed initialization" Nov 1 00:42:49.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:49.287312 systemd[1]: Started docker.service. Nov 1 00:42:49.304342 env[1571]: time="2025-11-01T00:42:49.304248303Z" level=info msg="API listen on /run/docker.sock" Nov 1 00:42:49.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:49.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:49.752365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 00:42:49.752722 systemd[1]: Stopped kubelet.service. Nov 1 00:42:49.755905 systemd[1]: Starting kubelet.service... Nov 1 00:42:50.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:50.069133 systemd[1]: Started kubelet.service. 
Nov 1 00:42:50.182156 kubelet[1699]: E1101 00:42:50.181860 1699 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:42:50.188011 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:42:50.188350 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:42:50.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Nov 1 00:42:50.556854 env[1301]: time="2025-11-01T00:42:50.556667124Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 1 00:42:51.086012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2162222309.mount: Deactivated successfully. 
Nov 1 00:42:52.827623 env[1301]: time="2025-11-01T00:42:52.827541795Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:52.830685 env[1301]: time="2025-11-01T00:42:52.830635305Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:52.833618 env[1301]: time="2025-11-01T00:42:52.833567899Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:52.838161 env[1301]: time="2025-11-01T00:42:52.838111891Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:52.838789 env[1301]: time="2025-11-01T00:42:52.838743101Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 1 00:42:52.841029 env[1301]: time="2025-11-01T00:42:52.840986697Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 1 00:42:54.381131 env[1301]: time="2025-11-01T00:42:54.381039449Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:54.384083 env[1301]: time="2025-11-01T00:42:54.384016916Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Nov 1 00:42:54.386838 env[1301]: time="2025-11-01T00:42:54.386775651Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:54.389510 env[1301]: time="2025-11-01T00:42:54.389463350Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:54.390674 env[1301]: time="2025-11-01T00:42:54.390617198Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 1 00:42:54.392620 env[1301]: time="2025-11-01T00:42:54.392580798Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 1 00:42:55.726626 env[1301]: time="2025-11-01T00:42:55.726534376Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:55.729785 env[1301]: time="2025-11-01T00:42:55.729710826Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:55.732674 env[1301]: time="2025-11-01T00:42:55.732613377Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:55.735432 env[1301]: time="2025-11-01T00:42:55.735378854Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:55.737063 env[1301]: time="2025-11-01T00:42:55.737004332Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 1 00:42:55.738113 env[1301]: time="2025-11-01T00:42:55.738073497Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 1 00:42:56.909144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1272754889.mount: Deactivated successfully. Nov 1 00:42:57.712823 env[1301]: time="2025-11-01T00:42:57.712736424Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:57.715952 env[1301]: time="2025-11-01T00:42:57.715886085Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:57.718091 env[1301]: time="2025-11-01T00:42:57.718043694Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:57.719948 env[1301]: time="2025-11-01T00:42:57.719905904Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:57.721055 env[1301]: time="2025-11-01T00:42:57.721008213Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference 
\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 1 00:42:57.723090 env[1301]: time="2025-11-01T00:42:57.723047594Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 1 00:42:58.119898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1505724057.mount: Deactivated successfully. Nov 1 00:42:59.376702 env[1301]: time="2025-11-01T00:42:59.376611138Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:59.379922 env[1301]: time="2025-11-01T00:42:59.379860932Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:59.382702 env[1301]: time="2025-11-01T00:42:59.382648546Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:59.385390 env[1301]: time="2025-11-01T00:42:59.385340827Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:59.386453 env[1301]: time="2025-11-01T00:42:59.386404972Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 1 00:42:59.387464 env[1301]: time="2025-11-01T00:42:59.387418911Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 1 00:42:59.760114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1569695625.mount: Deactivated successfully. 
Nov 1 00:42:59.768290 env[1301]: time="2025-11-01T00:42:59.768214155Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:59.770596 env[1301]: time="2025-11-01T00:42:59.770542762Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:59.772809 env[1301]: time="2025-11-01T00:42:59.772764170Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:59.775017 env[1301]: time="2025-11-01T00:42:59.774953860Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:59.775876 env[1301]: time="2025-11-01T00:42:59.775821072Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 1 00:42:59.776859 env[1301]: time="2025-11-01T00:42:59.776823078Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 1 00:43:00.160790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3270824753.mount: Deactivated successfully. Nov 1 00:43:00.323417 kernel: kauditd_printk_skb: 88 callbacks suppressed Nov 1 00:43:00.323631 kernel: audit: type=1130 audit(1761957780.293:185): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:43:00.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:00.294386 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 1 00:43:00.294725 systemd[1]: Stopped kubelet.service. Nov 1 00:43:00.302421 systemd[1]: Starting kubelet.service... Nov 1 00:43:00.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:00.346401 kernel: audit: type=1131 audit(1761957780.293:186): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:00.561779 systemd[1]: Started kubelet.service. Nov 1 00:43:00.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:00.584586 kernel: audit: type=1130 audit(1761957780.561:187): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:43:00.656070 kubelet[1716]: E1101 00:43:00.655999 1716 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:43:00.684237 kernel: audit: type=1131 audit(1761957780.660:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Nov 1 00:43:00.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Nov 1 00:43:00.661113 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:43:00.661491 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 1 00:43:03.320183 env[1301]: time="2025-11-01T00:43:03.320079199Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:03.323589 env[1301]: time="2025-11-01T00:43:03.323524354Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:03.326696 env[1301]: time="2025-11-01T00:43:03.326638263Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:03.329328 env[1301]: time="2025-11-01T00:43:03.329277728Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:03.330625 env[1301]: time="2025-11-01T00:43:03.330568835Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 1 00:43:06.214355 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 1 00:43:06.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:06.239293 kernel: audit: type=1131 audit(1761957786.213:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:43:07.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:07.383709 systemd[1]: Stopped kubelet.service. Nov 1 00:43:07.388418 systemd[1]: Starting kubelet.service... Nov 1 00:43:07.406202 kernel: audit: type=1130 audit(1761957787.383:190): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:07.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:07.442321 kernel: audit: type=1131 audit(1761957787.383:191): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:07.441843 systemd[1]: Reloading. Nov 1 00:43:07.596147 /usr/lib/systemd/system-generators/torcx-generator[1774]: time="2025-11-01T00:43:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:43:07.601292 /usr/lib/systemd/system-generators/torcx-generator[1774]: time="2025-11-01T00:43:07Z" level=info msg="torcx already run" Nov 1 00:43:07.726516 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:43:07.727521 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Nov 1 00:43:07.758025 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:43:07.889105 systemd[1]: Started kubelet.service. Nov 1 00:43:07.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:07.911190 kernel: audit: type=1130 audit(1761957787.889:192): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:07.914024 systemd[1]: Stopping kubelet.service... Nov 1 00:43:07.916071 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:43:07.916505 systemd[1]: Stopped kubelet.service. Nov 1 00:43:07.919616 systemd[1]: Starting kubelet.service... Nov 1 00:43:07.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:07.945204 kernel: audit: type=1131 audit(1761957787.916:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:08.209402 systemd[1]: Started kubelet.service. Nov 1 00:43:08.237610 kernel: audit: type=1130 audit(1761957788.212:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:43:08.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:08.307513 kubelet[1839]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:43:08.307990 kubelet[1839]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:43:08.308075 kubelet[1839]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:43:08.308318 kubelet[1839]: I1101 00:43:08.308273 1839 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:43:08.874198 kubelet[1839]: I1101 00:43:08.874112 1839 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:43:08.874198 kubelet[1839]: I1101 00:43:08.874161 1839 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:43:08.874747 kubelet[1839]: I1101 00:43:08.874717 1839 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:43:08.963485 kubelet[1839]: E1101 00:43:08.963420 1839 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.16:6443: connect: connection 
refused" logger="UnhandledError" Nov 1 00:43:08.963749 kubelet[1839]: I1101 00:43:08.963440 1839 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:43:08.975630 kubelet[1839]: E1101 00:43:08.975561 1839 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:43:08.975931 kubelet[1839]: I1101 00:43:08.975883 1839 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:43:08.982103 kubelet[1839]: I1101 00:43:08.982057 1839 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 1 00:43:08.982926 kubelet[1839]: I1101 00:43:08.982875 1839 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:43:08.983410 kubelet[1839]: I1101 00:43:08.982926 1839 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 1 00:43:08.983665 kubelet[1839]: I1101 00:43:08.983433 1839 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:43:08.983665 kubelet[1839]: I1101 00:43:08.983453 1839 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:43:08.983665 kubelet[1839]: I1101 00:43:08.983649 1839 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:43:08.991675 kubelet[1839]: I1101 
00:43:08.991627 1839 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:43:08.991852 kubelet[1839]: I1101 00:43:08.991697 1839 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:43:08.991852 kubelet[1839]: I1101 00:43:08.991736 1839 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:43:08.991852 kubelet[1839]: I1101 00:43:08.991755 1839 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:43:09.003300 kubelet[1839]: W1101 00:43:09.002917 1839 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762&limit=500&resourceVersion=0": dial tcp 10.128.0.16:6443: connect: connection refused Nov 1 00:43:09.003300 kubelet[1839]: E1101 00:43:09.003045 1839 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762&limit=500&resourceVersion=0\": dial tcp 10.128.0.16:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:43:09.003300 kubelet[1839]: W1101 00:43:09.003282 1839 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.16:6443: connect: connection refused Nov 1 00:43:09.003722 kubelet[1839]: E1101 00:43:09.003356 1839 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.16:6443: connect: connection refused" logger="UnhandledError" Nov 1 
00:43:09.003722 kubelet[1839]: I1101 00:43:09.003509 1839 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:43:09.004893 kubelet[1839]: I1101 00:43:09.004337 1839 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:43:09.007886 kubelet[1839]: W1101 00:43:09.007839 1839 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 1 00:43:09.017435 kubelet[1839]: I1101 00:43:09.017379 1839 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:43:09.017636 kubelet[1839]: I1101 00:43:09.017471 1839 server.go:1287] "Started kubelet" Nov 1 00:43:09.019000 audit[1839]: AVC avc: denied { mac_admin } for pid=1839 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:09.034530 kubelet[1839]: I1101 00:43:09.019906 1839 kubelet.go:1507] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins_registry: invalid argument" Nov 1 00:43:09.034530 kubelet[1839]: I1101 00:43:09.019986 1839 kubelet.go:1511] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Nov 1 00:43:09.034530 kubelet[1839]: I1101 00:43:09.020126 1839 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:43:09.034530 kubelet[1839]: I1101 00:43:09.029906 1839 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:43:09.034530 kubelet[1839]: I1101 00:43:09.031534 1839 server.go:479] "Adding debug handlers to kubelet server" 
Nov 1 00:43:09.048243 kernel: audit: type=1400 audit(1761957789.019:195): avc: denied { mac_admin } for pid=1839 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:09.048484 kernel: audit: type=1401 audit(1761957789.019:195): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:43:09.019000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:43:09.051822 kubelet[1839]: I1101 00:43:09.051705 1839 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:43:09.052478 kubelet[1839]: I1101 00:43:09.052453 1839 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:43:09.053723 kernel: audit: type=1300 audit(1761957789.019:195): arch=c000003e syscall=188 success=no exit=-22 a0=c0007b0fc0 a1=c0008d8ed0 a2=c0007b0f90 a3=25 items=0 ppid=1 pid=1839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:09.019000 audit[1839]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0007b0fc0 a1=c0008d8ed0 a2=c0007b0f90 a3=25 items=0 ppid=1 pid=1839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:09.054429 kubelet[1839]: I1101 00:43:09.054399 1839 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:43:09.058627 kubelet[1839]: I1101 00:43:09.058598 1839 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:43:09.061121 kubelet[1839]: E1101 00:43:09.061087 1839 kubelet_node_status.go:466] 
"Error getting the current node from lister" err="node \"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" not found" Nov 1 00:43:09.063829 kubelet[1839]: I1101 00:43:09.063805 1839 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:43:09.063973 kubelet[1839]: E1101 00:43:09.047805 1839 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.16:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.16:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762.1873bb461b9d8f20 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762,UID:ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762,},FirstTimestamp:2025-11-01 00:43:09.01741136 +0000 UTC m=+0.792737458,LastTimestamp:2025-11-01 00:43:09.01741136 +0000 UTC m=+0.792737458,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762,}" Nov 1 00:43:09.064263 kubelet[1839]: I1101 00:43:09.064247 1839 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:43:09.064992 kubelet[1839]: W1101 00:43:09.064934 1839 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.16:6443: connect: connection refused Nov 1 00:43:09.065158 kubelet[1839]: E1101 00:43:09.065130 1839 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: Get \"https://10.128.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.16:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:43:09.065433 kubelet[1839]: E1101 00:43:09.065384 1839 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762?timeout=10s\": dial tcp 10.128.0.16:6443: connect: connection refused" interval="200ms" Nov 1 00:43:09.019000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:43:09.117198 kernel: audit: type=1327 audit(1761957789.019:195): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:43:09.019000 audit[1839]: AVC avc: denied { mac_admin } for pid=1839 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:09.019000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:43:09.019000 audit[1839]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000938360 a1=c0008d8ee8 a2=c0007b1140 a3=25 items=0 ppid=1 pid=1839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:09.019000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:43:09.025000 audit[1850]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1850 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:09.025000 audit[1850]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe0326e5e0 a2=0 a3=7ffe0326e5cc items=0 ppid=1839 pid=1850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:09.025000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Nov 1 00:43:09.027000 audit[1851]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1851 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:09.027000 audit[1851]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9abb3bf0 a2=0 a3=7fff9abb3bdc items=0 ppid=1839 pid=1851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:09.027000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Nov 1 00:43:09.064000 audit[1853]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1853 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:09.064000 audit[1853]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff8edfdaf0 a2=0 a3=7fff8edfdadc items=0 ppid=1839 pid=1853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:09.064000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 00:43:09.069000 audit[1855]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1855 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:09.069000 audit[1855]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffe1a15830 a2=0 a3=7fffe1a1581c items=0 ppid=1839 pid=1855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:09.069000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 00:43:09.119715 kubelet[1839]: I1101 00:43:09.119671 1839 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:43:09.120118 kubelet[1839]: I1101 00:43:09.120061 1839 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:43:09.123000 audit[1860]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1860 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:09.123000 audit[1860]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc79743f60 a2=0 a3=7ffc79743f4c items=0 ppid=1839 pid=1860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:09.123000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Nov 1 00:43:09.124368 kubelet[1839]: I1101 00:43:09.124262 1839 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:43:09.124510 kubelet[1839]: I1101 00:43:09.124268 1839 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 00:43:09.126000 audit[1861]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1861 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:09.126000 audit[1861]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe328821e0 a2=0 a3=7ffe328821cc items=0 ppid=1839 pid=1861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:09.126000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Nov 1 00:43:09.127267 kubelet[1839]: I1101 00:43:09.127237 1839 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 00:43:09.127430 kubelet[1839]: I1101 00:43:09.127411 1839 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:43:09.127558 kubelet[1839]: I1101 00:43:09.127540 1839 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 1 00:43:09.127661 kubelet[1839]: I1101 00:43:09.127647 1839 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:43:09.127845 kubelet[1839]: E1101 00:43:09.127817 1839 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:43:09.129000 audit[1862]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1862 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:09.129000 audit[1862]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc06c77d40 a2=0 a3=7ffc06c77d2c items=0 ppid=1839 pid=1862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:09.129000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Nov 1 00:43:09.132321 kubelet[1839]: W1101 00:43:09.131907 1839 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.16:6443: connect: connection refused Nov 1 00:43:09.132321 kubelet[1839]: E1101 00:43:09.131994 1839 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.16:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:43:09.133000 audit[1863]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=1863 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:09.133000 audit[1863]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 
a1=7ffc99de2d50 a2=0 a3=7ffc99de2d3c items=0 ppid=1839 pid=1863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:09.133000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Nov 1 00:43:09.135000 audit[1864]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1864 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:09.135000 audit[1864]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc270b5400 a2=0 a3=7ffc270b53ec items=0 ppid=1839 pid=1864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:09.135000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Nov 1 00:43:09.137000 audit[1865]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=1865 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:09.137000 audit[1865]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffe2a2f0eb0 a2=0 a3=7ffe2a2f0e9c items=0 ppid=1839 pid=1865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:09.137000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Nov 1 00:43:09.140000 audit[1866]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_chain pid=1866 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:09.140000 audit[1866]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=104 a0=3 a1=7ffdec5908d0 a2=0 a3=7ffdec5908bc items=0 ppid=1839 pid=1866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:09.140000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Nov 1 00:43:09.143000 audit[1867]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1867 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:09.143000 audit[1867]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe80a954e0 a2=0 a3=7ffe80a954cc items=0 ppid=1839 pid=1867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:09.143000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Nov 1 00:43:09.165854 kubelet[1839]: E1101 00:43:09.165791 1839 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" not found" Nov 1 00:43:09.183782 kubelet[1839]: I1101 00:43:09.183745 1839 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:43:09.184114 kubelet[1839]: I1101 00:43:09.184065 1839 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:43:09.184276 kubelet[1839]: I1101 00:43:09.184248 1839 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:43:09.186920 kubelet[1839]: I1101 00:43:09.186873 1839 policy_none.go:49] "None policy: Start" Nov 1 00:43:09.186920 kubelet[1839]: I1101 00:43:09.186908 1839 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:43:09.187157 kubelet[1839]: I1101 
00:43:09.186932 1839 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:43:09.193816 kubelet[1839]: I1101 00:43:09.193762 1839 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:43:09.193000 audit[1839]: AVC avc: denied { mac_admin } for pid=1839 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:09.193000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:43:09.193000 audit[1839]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000058de0 a1=c000bfc5a0 a2=c000058d80 a3=25 items=0 ppid=1 pid=1839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:09.193000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:43:09.194455 kubelet[1839]: I1101 00:43:09.193874 1839 server.go:94] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" Nov 1 00:43:09.194455 kubelet[1839]: I1101 00:43:09.194061 1839 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:43:09.194455 kubelet[1839]: I1101 00:43:09.194080 1839 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:43:09.196994 kubelet[1839]: I1101 00:43:09.196924 1839 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:43:09.200698 kubelet[1839]: E1101 00:43:09.200649 1839 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:43:09.200844 kubelet[1839]: E1101 00:43:09.200714 1839 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" not found" Nov 1 00:43:09.246111 kubelet[1839]: E1101 00:43:09.246037 1839 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" not found" node="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:09.253281 kubelet[1839]: E1101 00:43:09.253216 1839 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" not found" node="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:09.258047 kubelet[1839]: E1101 00:43:09.257986 1839 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" not found" node="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 
00:43:09.266542 kubelet[1839]: I1101 00:43:09.266480 1839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ced363ada052640ee01f3338c1d791f8-ca-certs\") pod \"kube-apiserver-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" (UID: \"ced363ada052640ee01f3338c1d791f8\") " pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:09.266803 kubelet[1839]: I1101 00:43:09.266747 1839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ced363ada052640ee01f3338c1d791f8-k8s-certs\") pod \"kube-apiserver-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" (UID: \"ced363ada052640ee01f3338c1d791f8\") " pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:09.266888 kubelet[1839]: I1101 00:43:09.266800 1839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ced363ada052640ee01f3338c1d791f8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" (UID: \"ced363ada052640ee01f3338c1d791f8\") " pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:09.266888 kubelet[1839]: I1101 00:43:09.266835 1839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b65cfd9397e90744ae02bdf0cf5f4cf7-ca-certs\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" (UID: \"b65cfd9397e90744ae02bdf0cf5f4cf7\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:09.266888 kubelet[1839]: I1101 00:43:09.266867 1839 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b65cfd9397e90744ae02bdf0cf5f4cf7-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" (UID: \"b65cfd9397e90744ae02bdf0cf5f4cf7\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:09.267061 kubelet[1839]: I1101 00:43:09.266899 1839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b65cfd9397e90744ae02bdf0cf5f4cf7-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" (UID: \"b65cfd9397e90744ae02bdf0cf5f4cf7\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:09.267061 kubelet[1839]: I1101 00:43:09.266930 1839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b65cfd9397e90744ae02bdf0cf5f4cf7-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" (UID: \"b65cfd9397e90744ae02bdf0cf5f4cf7\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:09.267061 kubelet[1839]: I1101 00:43:09.266964 1839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b65cfd9397e90744ae02bdf0cf5f4cf7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" (UID: \"b65cfd9397e90744ae02bdf0cf5f4cf7\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:09.267061 kubelet[1839]: I1101 00:43:09.266998 1839 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e4e4e386ab0b4a4a7500dd57b05a963-kubeconfig\") pod \"kube-scheduler-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" (UID: \"7e4e4e386ab0b4a4a7500dd57b05a963\") " pod="kube-system/kube-scheduler-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:09.267427 kubelet[1839]: E1101 00:43:09.267376 1839 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762?timeout=10s\": dial tcp 10.128.0.16:6443: connect: connection refused" interval="400ms" Nov 1 00:43:09.299560 kubelet[1839]: I1101 00:43:09.299515 1839 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:09.300402 kubelet[1839]: E1101 00:43:09.300341 1839 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.16:6443/api/v1/nodes\": dial tcp 10.128.0.16:6443: connect: connection refused" node="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:09.508662 kubelet[1839]: I1101 00:43:09.507880 1839 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:09.508662 kubelet[1839]: E1101 00:43:09.508565 1839 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.16:6443/api/v1/nodes\": dial tcp 10.128.0.16:6443: connect: connection refused" node="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:09.548164 env[1301]: time="2025-11-01T00:43:09.548053718Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762,Uid:ced363ada052640ee01f3338c1d791f8,Namespace:kube-system,Attempt:0,}" Nov 1 00:43:09.555266 env[1301]: time="2025-11-01T00:43:09.555206327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762,Uid:b65cfd9397e90744ae02bdf0cf5f4cf7,Namespace:kube-system,Attempt:0,}" Nov 1 00:43:09.560237 env[1301]: time="2025-11-01T00:43:09.560156683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762,Uid:7e4e4e386ab0b4a4a7500dd57b05a963,Namespace:kube-system,Attempt:0,}" Nov 1 00:43:09.668914 kubelet[1839]: E1101 00:43:09.668839 1839 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762?timeout=10s\": dial tcp 10.128.0.16:6443: connect: connection refused" interval="800ms" Nov 1 00:43:09.866145 kubelet[1839]: W1101 00:43:09.866012 1839 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.16:6443: connect: connection refused Nov 1 00:43:09.866568 kubelet[1839]: E1101 00:43:09.866525 1839 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.16:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:43:09.914480 kubelet[1839]: I1101 00:43:09.914426 1839 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 
1 00:43:09.915034 kubelet[1839]: E1101 00:43:09.914983 1839 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.16:6443/api/v1/nodes\": dial tcp 10.128.0.16:6443: connect: connection refused" node="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:09.972920 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2523882001.mount: Deactivated successfully. Nov 1 00:43:09.986462 env[1301]: time="2025-11-01T00:43:09.986371859Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:09.987916 env[1301]: time="2025-11-01T00:43:09.987844919Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:09.991879 env[1301]: time="2025-11-01T00:43:09.991820844Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:09.993261 env[1301]: time="2025-11-01T00:43:09.993209247Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:09.995800 env[1301]: time="2025-11-01T00:43:09.995732122Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:09.997080 env[1301]: time="2025-11-01T00:43:09.997022690Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:09.998936 env[1301]: 
time="2025-11-01T00:43:09.998877039Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:10.001692 env[1301]: time="2025-11-01T00:43:10.001636455Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:10.004312 env[1301]: time="2025-11-01T00:43:10.004264215Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:10.007315 env[1301]: time="2025-11-01T00:43:10.007259821Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:10.008804 env[1301]: time="2025-11-01T00:43:10.008749748Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:10.009794 env[1301]: time="2025-11-01T00:43:10.009754535Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:10.056485 env[1301]: time="2025-11-01T00:43:10.056115610Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:43:10.056485 env[1301]: time="2025-11-01T00:43:10.056210702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:43:10.056485 env[1301]: time="2025-11-01T00:43:10.056234123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:43:10.057432 env[1301]: time="2025-11-01T00:43:10.057324401Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/96f714257de35166dfec9c12a85246aaea03316028d66706b0fdad03873793cb pid=1878 runtime=io.containerd.runc.v2 Nov 1 00:43:10.076430 env[1301]: time="2025-11-01T00:43:10.076303407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:43:10.076430 env[1301]: time="2025-11-01T00:43:10.076446707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:43:10.076732 env[1301]: time="2025-11-01T00:43:10.076489988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:43:10.076846 env[1301]: time="2025-11-01T00:43:10.076789045Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0c9f98b6787b43e238930d1a54919f5a1b595241e366d9647a47f3601125d238 pid=1895 runtime=io.containerd.runc.v2 Nov 1 00:43:10.100136 env[1301]: time="2025-11-01T00:43:10.099985600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:43:10.100136 env[1301]: time="2025-11-01T00:43:10.100071555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:43:10.100530 env[1301]: time="2025-11-01T00:43:10.100113253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:43:10.100530 env[1301]: time="2025-11-01T00:43:10.100413341Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/570cb70c970ed0f37cb64b5c52134a6319e232cc93895f2b53a27d22fb9106bc pid=1921 runtime=io.containerd.runc.v2 Nov 1 00:43:10.265540 env[1301]: time="2025-11-01T00:43:10.265357958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762,Uid:b65cfd9397e90744ae02bdf0cf5f4cf7,Namespace:kube-system,Attempt:0,} returns sandbox id \"96f714257de35166dfec9c12a85246aaea03316028d66706b0fdad03873793cb\"" Nov 1 00:43:10.270645 kubelet[1839]: E1101 00:43:10.270154 1839 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-3510-3-8-nightly-20251031-2100-553bf" Nov 1 00:43:10.272395 env[1301]: time="2025-11-01T00:43:10.272340197Z" level=info msg="CreateContainer within sandbox \"96f714257de35166dfec9c12a85246aaea03316028d66706b0fdad03873793cb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:43:10.289481 env[1301]: time="2025-11-01T00:43:10.289409437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762,Uid:ced363ada052640ee01f3338c1d791f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c9f98b6787b43e238930d1a54919f5a1b595241e366d9647a47f3601125d238\"" Nov 1 00:43:10.291949 kubelet[1839]: E1101 00:43:10.291889 1839 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd" Nov 1 00:43:10.295663 
env[1301]: time="2025-11-01T00:43:10.295600906Z" level=info msg="CreateContainer within sandbox \"0c9f98b6787b43e238930d1a54919f5a1b595241e366d9647a47f3601125d238\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:43:10.299721 env[1301]: time="2025-11-01T00:43:10.299665146Z" level=info msg="CreateContainer within sandbox \"96f714257de35166dfec9c12a85246aaea03316028d66706b0fdad03873793cb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cf0f0cee9241438348a601df610588d70f5bc0b95c821c51b355073df7c02ed3\"" Nov 1 00:43:10.305334 kubelet[1839]: W1101 00:43:10.305068 1839 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.16:6443: connect: connection refused Nov 1 00:43:10.305334 kubelet[1839]: E1101 00:43:10.305287 1839 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.16:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:43:10.313606 env[1301]: time="2025-11-01T00:43:10.313535514Z" level=info msg="CreateContainer within sandbox \"0c9f98b6787b43e238930d1a54919f5a1b595241e366d9647a47f3601125d238\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9347b6b3fb23a56d9ddae1c1b0aaf39d8f68a5819b5e01a9f32db6cf5dc179b1\"" Nov 1 00:43:10.314724 env[1301]: time="2025-11-01T00:43:10.314686001Z" level=info msg="StartContainer for \"cf0f0cee9241438348a601df610588d70f5bc0b95c821c51b355073df7c02ed3\"" Nov 1 00:43:10.317221 env[1301]: time="2025-11-01T00:43:10.317163956Z" level=info msg="StartContainer for \"9347b6b3fb23a56d9ddae1c1b0aaf39d8f68a5819b5e01a9f32db6cf5dc179b1\"" Nov 1 00:43:10.334631 env[1301]: 
time="2025-11-01T00:43:10.334558964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762,Uid:7e4e4e386ab0b4a4a7500dd57b05a963,Namespace:kube-system,Attempt:0,} returns sandbox id \"570cb70c970ed0f37cb64b5c52134a6319e232cc93895f2b53a27d22fb9106bc\"" Nov 1 00:43:10.336872 kubelet[1839]: E1101 00:43:10.336811 1839 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd" Nov 1 00:43:10.342336 env[1301]: time="2025-11-01T00:43:10.342276171Z" level=info msg="CreateContainer within sandbox \"570cb70c970ed0f37cb64b5c52134a6319e232cc93895f2b53a27d22fb9106bc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:43:10.362820 env[1301]: time="2025-11-01T00:43:10.362730312Z" level=info msg="CreateContainer within sandbox \"570cb70c970ed0f37cb64b5c52134a6319e232cc93895f2b53a27d22fb9106bc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"74c5a9ee018fbfd5ca1afb01d4b2fbdd22b9a6ea502e8170e0786e3d4247da6d\"" Nov 1 00:43:10.363606 env[1301]: time="2025-11-01T00:43:10.363560052Z" level=info msg="StartContainer for \"74c5a9ee018fbfd5ca1afb01d4b2fbdd22b9a6ea502e8170e0786e3d4247da6d\"" Nov 1 00:43:10.444997 kubelet[1839]: W1101 00:43:10.444902 1839 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762&limit=500&resourceVersion=0": dial tcp 10.128.0.16:6443: connect: connection refused Nov 1 00:43:10.445298 kubelet[1839]: E1101 00:43:10.445021 1839 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.128.0.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762&limit=500&resourceVersion=0\": dial tcp 10.128.0.16:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:43:10.469811 kubelet[1839]: E1101 00:43:10.469748 1839 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762?timeout=10s\": dial tcp 10.128.0.16:6443: connect: connection refused" interval="1.6s" Nov 1 00:43:10.515123 env[1301]: time="2025-11-01T00:43:10.515047626Z" level=info msg="StartContainer for \"9347b6b3fb23a56d9ddae1c1b0aaf39d8f68a5819b5e01a9f32db6cf5dc179b1\" returns successfully" Nov 1 00:43:10.523978 kubelet[1839]: W1101 00:43:10.522357 1839 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.16:6443: connect: connection refused Nov 1 00:43:10.524950 kubelet[1839]: E1101 00:43:10.524856 1839 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.16:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:43:10.584511 env[1301]: time="2025-11-01T00:43:10.584426361Z" level=info msg="StartContainer for \"cf0f0cee9241438348a601df610588d70f5bc0b95c821c51b355073df7c02ed3\" returns successfully" Nov 1 00:43:10.633231 env[1301]: time="2025-11-01T00:43:10.633134549Z" level=info msg="StartContainer for \"74c5a9ee018fbfd5ca1afb01d4b2fbdd22b9a6ea502e8170e0786e3d4247da6d\" returns successfully" Nov 1 00:43:10.722343 kubelet[1839]: I1101 00:43:10.721661 1839 kubelet_node_status.go:75] "Attempting to register 
node" node="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:11.162195 kubelet[1839]: E1101 00:43:11.162138 1839 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" not found" node="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:11.163141 kubelet[1839]: E1101 00:43:11.163109 1839 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" not found" node="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:11.167218 kubelet[1839]: E1101 00:43:11.164445 1839 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" not found" node="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:12.169419 kubelet[1839]: E1101 00:43:12.169370 1839 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" not found" node="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:12.171054 kubelet[1839]: E1101 00:43:12.171015 1839 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" not found" node="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:13.695868 kubelet[1839]: E1101 00:43:13.695819 1839 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" not found" node="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:13.742044 kubelet[1839]: E1101 00:43:13.741998 1839 kubelet.go:3190] "No need to create a 
mirror pod, since failed to get node info from the cluster" err="node \"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" not found" node="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:14.321479 kubelet[1839]: I1101 00:43:14.321418 1839 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:14.354663 kubelet[1839]: E1101 00:43:14.354456 1839 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762.1873bb461b9d8f20 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762,UID:ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762,},FirstTimestamp:2025-11-01 00:43:09.01741136 +0000 UTC m=+0.792737458,LastTimestamp:2025-11-01 00:43:09.01741136 +0000 UTC m=+0.792737458,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762,}" Nov 1 00:43:14.363358 kubelet[1839]: I1101 00:43:14.363310 1839 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:14.422145 kubelet[1839]: E1101 00:43:14.422062 1839 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Nov 1 00:43:14.423898 kubelet[1839]: E1101 00:43:14.423842 1839 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" is forbidden: 
no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:14.424125 kubelet[1839]: I1101 00:43:14.424104 1839 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:14.434195 kubelet[1839]: E1101 00:43:14.434138 1839 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:14.434510 kubelet[1839]: I1101 00:43:14.434486 1839 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:14.444619 kubelet[1839]: E1101 00:43:14.444550 1839 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:15.001269 kubelet[1839]: I1101 00:43:15.001216 1839 apiserver.go:52] "Watching apiserver" Nov 1 00:43:15.064612 kubelet[1839]: I1101 00:43:15.064550 1839 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:43:16.241845 systemd[1]: Reloading. 
Nov 1 00:43:16.367688 /usr/lib/systemd/system-generators/torcx-generator[2134]: time="2025-11-01T00:43:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:43:16.367745 /usr/lib/systemd/system-generators/torcx-generator[2134]: time="2025-11-01T00:43:16Z" level=info msg="torcx already run" Nov 1 00:43:16.488131 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:43:16.488164 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:43:16.513703 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:43:16.664120 systemd[1]: Stopping kubelet.service... Nov 1 00:43:16.686166 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:43:16.687048 systemd[1]: Stopped kubelet.service. Nov 1 00:43:16.714425 kernel: kauditd_printk_skb: 44 callbacks suppressed Nov 1 00:43:16.714651 kernel: audit: type=1131 audit(1761957796.686:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:16.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:16.715056 systemd[1]: Starting kubelet.service... Nov 1 00:43:17.034968 systemd[1]: Started kubelet.service. 
Nov 1 00:43:17.066818 kernel: audit: type=1130 audit(1761957797.038:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:17.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:17.164422 kubelet[2193]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:43:17.165308 kubelet[2193]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:43:17.165484 kubelet[2193]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:43:17.165792 kubelet[2193]: I1101 00:43:17.165690 2193 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:43:17.182837 kubelet[2193]: I1101 00:43:17.182763 2193 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:43:17.182837 kubelet[2193]: I1101 00:43:17.182810 2193 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:43:17.183978 kubelet[2193]: I1101 00:43:17.183933 2193 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:43:17.188497 kubelet[2193]: I1101 00:43:17.188451 2193 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Nov 1 00:43:17.196804 kubelet[2193]: I1101 00:43:17.196747 2193 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:43:17.202575 kubelet[2193]: E1101 00:43:17.202510 2193 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:43:17.202575 kubelet[2193]: I1101 00:43:17.202561 2193 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:43:17.212223 kubelet[2193]: I1101 00:43:17.210041 2193 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 1 00:43:17.212223 kubelet[2193]: I1101 00:43:17.211781 2193 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:43:17.212653 kubelet[2193]: I1101 00:43:17.211875 2193 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 1 00:43:17.212653 kubelet[2193]: I1101 00:43:17.212546 2193 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:43:17.212653 kubelet[2193]: I1101 00:43:17.212568 2193 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:43:17.212653 kubelet[2193]: I1101 00:43:17.212653 2193 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:43:17.212987 kubelet[2193]: I1101 
00:43:17.212919 2193 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:43:17.225231 kubelet[2193]: I1101 00:43:17.223871 2193 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:43:17.225231 kubelet[2193]: I1101 00:43:17.223981 2193 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:43:17.225231 kubelet[2193]: I1101 00:43:17.224004 2193 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:43:17.231764 kubelet[2193]: I1101 00:43:17.231722 2193 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:43:17.232843 kubelet[2193]: I1101 00:43:17.232815 2193 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:43:17.233778 kubelet[2193]: I1101 00:43:17.233754 2193 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:43:17.233986 kubelet[2193]: I1101 00:43:17.233969 2193 server.go:1287] "Started kubelet" Nov 1 00:43:17.243000 audit[2193]: AVC avc: denied { mac_admin } for pid=2193 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:17.266524 kubelet[2193]: I1101 00:43:17.245243 2193 kubelet.go:1507] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins_registry: invalid argument" Nov 1 00:43:17.266524 kubelet[2193]: I1101 00:43:17.245409 2193 kubelet.go:1511] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Nov 1 00:43:17.266524 kubelet[2193]: I1101 00:43:17.245519 2193 fs_resource_analyzer.go:67] 
"Starting FS ResourceAnalyzer" Nov 1 00:43:17.266524 kubelet[2193]: I1101 00:43:17.249479 2193 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:43:17.266524 kubelet[2193]: I1101 00:43:17.252683 2193 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:43:17.266524 kubelet[2193]: I1101 00:43:17.259702 2193 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:43:17.266524 kubelet[2193]: I1101 00:43:17.260094 2193 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:43:17.266524 kubelet[2193]: I1101 00:43:17.260995 2193 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:43:17.266524 kubelet[2193]: I1101 00:43:17.265927 2193 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:43:17.268403 kernel: audit: type=1400 audit(1761957797.243:212): avc: denied { mac_admin } for pid=2193 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:17.268546 kernel: audit: type=1401 audit(1761957797.243:212): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:43:17.243000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:43:17.274613 kubelet[2193]: I1101 00:43:17.274575 2193 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:43:17.275045 kubelet[2193]: I1101 00:43:17.275028 2193 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:43:17.277415 kubelet[2193]: E1101 00:43:17.266162 2193 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" not found" Nov 1 00:43:17.285891 kubelet[2193]: I1101 
00:43:17.285731 2193 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:43:17.287838 kubelet[2193]: I1101 00:43:17.287793 2193 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:43:17.243000 audit[2193]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b5d3b0 a1=c00088c8b8 a2=c000b5d380 a3=25 items=0 ppid=1 pid=2193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:17.314586 kubelet[2193]: E1101 00:43:17.314522 2193 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:43:17.315488 kubelet[2193]: I1101 00:43:17.315465 2193 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:43:17.243000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:43:17.364804 kernel: audit: type=1300 audit(1761957797.243:212): arch=c000003e syscall=188 success=no exit=-22 a0=c000b5d3b0 a1=c00088c8b8 a2=c000b5d380 a3=25 items=0 ppid=1 pid=2193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:17.365017 kernel: audit: type=1327 audit(1761957797.243:212): 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:43:17.244000 audit[2193]: AVC avc: denied { mac_admin } for pid=2193 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:17.384356 kubelet[2193]: I1101 00:43:17.384285 2193 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 00:43:17.388206 kernel: audit: type=1400 audit(1761957797.244:213): avc: denied { mac_admin } for pid=2193 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:17.388722 kubelet[2193]: I1101 00:43:17.388684 2193 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 00:43:17.388964 kubelet[2193]: I1101 00:43:17.388945 2193 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:43:17.389113 kubelet[2193]: I1101 00:43:17.389094 2193 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 1 00:43:17.389245 kubelet[2193]: I1101 00:43:17.389224 2193 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:43:17.389429 kubelet[2193]: E1101 00:43:17.389400 2193 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:43:17.244000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:43:17.402206 kernel: audit: type=1401 audit(1761957797.244:213): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:43:17.244000 audit[2193]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b24f20 a1=c00088c8d0 a2=c000b5d440 a3=25 items=0 ppid=1 pid=2193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:17.244000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:43:17.470547 kernel: audit: type=1300 audit(1761957797.244:213): arch=c000003e syscall=188 success=no exit=-22 a0=c000b24f20 a1=c00088c8d0 a2=c000b5d440 a3=25 items=0 ppid=1 pid=2193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:17.470771 kernel: audit: type=1327 audit(1761957797.244:213): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:43:17.494263 kubelet[2193]: 
E1101 00:43:17.494199 2193 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 1 00:43:17.513551 kubelet[2193]: I1101 00:43:17.513512 2193 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:43:17.514000 kubelet[2193]: I1101 00:43:17.513972 2193 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:43:17.514159 kubelet[2193]: I1101 00:43:17.514143 2193 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:43:17.514578 kubelet[2193]: I1101 00:43:17.514552 2193 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:43:17.514780 kubelet[2193]: I1101 00:43:17.514719 2193 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:43:17.514892 kubelet[2193]: I1101 00:43:17.514878 2193 policy_none.go:49] "None policy: Start" Nov 1 00:43:17.514999 kubelet[2193]: I1101 00:43:17.514985 2193 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:43:17.515098 kubelet[2193]: I1101 00:43:17.515085 2193 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:43:17.515536 kubelet[2193]: I1101 00:43:17.515515 2193 state_mem.go:75] "Updated machine memory state" Nov 1 00:43:17.517723 kubelet[2193]: I1101 00:43:17.517694 2193 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:43:17.516000 audit[2193]: AVC avc: denied { mac_admin } for pid=2193 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:17.516000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:43:17.516000 audit[2193]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0009370b0 a1=c0009835d8 a2=c000937080 a3=25 items=0 ppid=1 pid=2193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" 
exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:17.516000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:43:17.518555 kubelet[2193]: I1101 00:43:17.518522 2193 server.go:94] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" Nov 1 00:43:17.518888 kubelet[2193]: I1101 00:43:17.518869 2193 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:43:17.519053 kubelet[2193]: I1101 00:43:17.519004 2193 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:43:17.520915 kubelet[2193]: I1101 00:43:17.520892 2193 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:43:17.525706 kubelet[2193]: E1101 00:43:17.525636 2193 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 00:43:17.638609 kubelet[2193]: I1101 00:43:17.638556 2193 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:17.649460 kubelet[2193]: I1101 00:43:17.648523 2193 kubelet_node_status.go:124] "Node was previously registered" node="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:17.649460 kubelet[2193]: I1101 00:43:17.648635 2193 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:17.695935 kubelet[2193]: I1101 00:43:17.695890 2193 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:17.702587 kubelet[2193]: I1101 00:43:17.696693 2193 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:17.702587 kubelet[2193]: I1101 00:43:17.697059 2193 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:17.704169 kubelet[2193]: W1101 00:43:17.702912 2193 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Nov 1 00:43:17.704582 kubelet[2193]: W1101 00:43:17.704553 2193 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Nov 1 00:43:17.705065 kubelet[2193]: W1101 00:43:17.705040 2193 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Nov 1 00:43:17.785760 kubelet[2193]: I1101 
00:43:17.785304 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ced363ada052640ee01f3338c1d791f8-ca-certs\") pod \"kube-apiserver-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" (UID: \"ced363ada052640ee01f3338c1d791f8\") " pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:17.785760 kubelet[2193]: I1101 00:43:17.785365 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ced363ada052640ee01f3338c1d791f8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" (UID: \"ced363ada052640ee01f3338c1d791f8\") " pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:17.785760 kubelet[2193]: I1101 00:43:17.785407 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b65cfd9397e90744ae02bdf0cf5f4cf7-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" (UID: \"b65cfd9397e90744ae02bdf0cf5f4cf7\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:17.785760 kubelet[2193]: I1101 00:43:17.785444 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e4e4e386ab0b4a4a7500dd57b05a963-kubeconfig\") pod \"kube-scheduler-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" (UID: \"7e4e4e386ab0b4a4a7500dd57b05a963\") " pod="kube-system/kube-scheduler-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:17.786093 kubelet[2193]: I1101 00:43:17.785478 2193 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ced363ada052640ee01f3338c1d791f8-k8s-certs\") pod \"kube-apiserver-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" (UID: \"ced363ada052640ee01f3338c1d791f8\") " pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:17.786093 kubelet[2193]: I1101 00:43:17.785509 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b65cfd9397e90744ae02bdf0cf5f4cf7-ca-certs\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" (UID: \"b65cfd9397e90744ae02bdf0cf5f4cf7\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:17.786093 kubelet[2193]: I1101 00:43:17.785539 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b65cfd9397e90744ae02bdf0cf5f4cf7-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" (UID: \"b65cfd9397e90744ae02bdf0cf5f4cf7\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:17.786093 kubelet[2193]: I1101 00:43:17.785569 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b65cfd9397e90744ae02bdf0cf5f4cf7-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" (UID: \"b65cfd9397e90744ae02bdf0cf5f4cf7\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:17.786308 kubelet[2193]: I1101 00:43:17.785615 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b65cfd9397e90744ae02bdf0cf5f4cf7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" (UID: \"b65cfd9397e90744ae02bdf0cf5f4cf7\") " pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:18.231834 kubelet[2193]: I1101 00:43:18.231773 2193 apiserver.go:52] "Watching apiserver" Nov 1 00:43:18.275133 kubelet[2193]: I1101 00:43:18.275085 2193 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:43:18.423762 kubelet[2193]: I1101 00:43:18.423721 2193 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:18.434401 kubelet[2193]: W1101 00:43:18.434358 2193 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters] Nov 1 00:43:18.434694 kubelet[2193]: E1101 00:43:18.434657 2193 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:43:18.459842 kubelet[2193]: I1101 00:43:18.459758 2193 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" podStartSLOduration=1.459739895 podStartE2EDuration="1.459739895s" podCreationTimestamp="2025-11-01 00:43:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:43:18.45927842 +0000 UTC m=+1.406379806" watchObservedRunningTime="2025-11-01 00:43:18.459739895 +0000 UTC m=+1.406841301" Nov 1 00:43:18.484821 
kubelet[2193]: I1101 00:43:18.484637 2193 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" podStartSLOduration=1.484614729 podStartE2EDuration="1.484614729s" podCreationTimestamp="2025-11-01 00:43:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:43:18.472077716 +0000 UTC m=+1.419179101" watchObservedRunningTime="2025-11-01 00:43:18.484614729 +0000 UTC m=+1.431716111" Nov 1 00:43:20.745359 update_engine[1287]: I1101 00:43:20.745275 1287 update_attempter.cc:509] Updating boot flags... Nov 1 00:43:23.143543 kubelet[2193]: I1101 00:43:23.143287 2193 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:43:23.144847 env[1301]: time="2025-11-01T00:43:23.144792484Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 1 00:43:23.146227 kubelet[2193]: I1101 00:43:23.145860 2193 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:43:23.563847 kubelet[2193]: I1101 00:43:23.563734 2193 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" podStartSLOduration=6.563646563 podStartE2EDuration="6.563646563s" podCreationTimestamp="2025-11-01 00:43:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:43:18.484563432 +0000 UTC m=+1.431664813" watchObservedRunningTime="2025-11-01 00:43:23.563646563 +0000 UTC m=+6.510747942" Nov 1 00:43:23.571644 kubelet[2193]: W1101 00:43:23.571594 2193 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762' and this object Nov 1 00:43:23.572009 kubelet[2193]: E1101 00:43:23.571969 2193 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762' and this object" logger="UnhandledError" Nov 1 00:43:23.572378 kubelet[2193]: I1101 00:43:23.572321 2193 status_manager.go:890] "Failed to get status for pod" podUID="0298c754-9c20-445e-a53e-ffa3a30ff696" pod="kube-system/kube-proxy-glbnp" err="pods 
\"kube-proxy-glbnp\" is forbidden: User \"system:node:ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762' and this object" Nov 1 00:43:23.572606 kubelet[2193]: W1101 00:43:23.572582 2193 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762' and this object Nov 1 00:43:23.572784 kubelet[2193]: E1101 00:43:23.572755 2193 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762' and this object" logger="UnhandledError" Nov 1 00:43:23.625581 kubelet[2193]: I1101 00:43:23.625509 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0298c754-9c20-445e-a53e-ffa3a30ff696-lib-modules\") pod \"kube-proxy-glbnp\" (UID: \"0298c754-9c20-445e-a53e-ffa3a30ff696\") " pod="kube-system/kube-proxy-glbnp" Nov 1 00:43:23.625581 kubelet[2193]: I1101 00:43:23.625592 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwrzq\" (UniqueName: \"kubernetes.io/projected/0298c754-9c20-445e-a53e-ffa3a30ff696-kube-api-access-wwrzq\") pod \"kube-proxy-glbnp\" (UID: 
\"0298c754-9c20-445e-a53e-ffa3a30ff696\") " pod="kube-system/kube-proxy-glbnp" Nov 1 00:43:23.625923 kubelet[2193]: I1101 00:43:23.625627 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0298c754-9c20-445e-a53e-ffa3a30ff696-kube-proxy\") pod \"kube-proxy-glbnp\" (UID: \"0298c754-9c20-445e-a53e-ffa3a30ff696\") " pod="kube-system/kube-proxy-glbnp" Nov 1 00:43:23.625923 kubelet[2193]: I1101 00:43:23.625652 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0298c754-9c20-445e-a53e-ffa3a30ff696-xtables-lock\") pod \"kube-proxy-glbnp\" (UID: \"0298c754-9c20-445e-a53e-ffa3a30ff696\") " pod="kube-system/kube-proxy-glbnp" Nov 1 00:43:24.431015 kubelet[2193]: I1101 00:43:24.430968 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tvfp\" (UniqueName: \"kubernetes.io/projected/65491bb9-5c05-4995-aab4-89157534cb6a-kube-api-access-5tvfp\") pod \"tigera-operator-7dcd859c48-v96kx\" (UID: \"65491bb9-5c05-4995-aab4-89157534cb6a\") " pod="tigera-operator/tigera-operator-7dcd859c48-v96kx" Nov 1 00:43:24.431795 kubelet[2193]: I1101 00:43:24.431738 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/65491bb9-5c05-4995-aab4-89157534cb6a-var-lib-calico\") pod \"tigera-operator-7dcd859c48-v96kx\" (UID: \"65491bb9-5c05-4995-aab4-89157534cb6a\") " pod="tigera-operator/tigera-operator-7dcd859c48-v96kx" Nov 1 00:43:24.540819 kubelet[2193]: I1101 00:43:24.540771 2193 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 1 00:43:24.623204 env[1301]: time="2025-11-01T00:43:24.623124121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-v96kx,Uid:65491bb9-5c05-4995-aab4-89157534cb6a,Namespace:tigera-operator,Attempt:0,}" Nov 1 00:43:24.655794 env[1301]: time="2025-11-01T00:43:24.655702384Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:43:24.656076 env[1301]: time="2025-11-01T00:43:24.655759365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:43:24.656076 env[1301]: time="2025-11-01T00:43:24.655778481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:43:24.656351 env[1301]: time="2025-11-01T00:43:24.656078615Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d329a0180d45cd08adae736524cb22fbabb759d8c6cdc1c579df6b2152fb7b3 pid=2257 runtime=io.containerd.runc.v2 Nov 1 00:43:24.689466 systemd[1]: run-containerd-runc-k8s.io-8d329a0180d45cd08adae736524cb22fbabb759d8c6cdc1c579df6b2152fb7b3-runc.myMRib.mount: Deactivated successfully. Nov 1 00:43:24.727457 kubelet[2193]: E1101 00:43:24.726836 2193 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Nov 1 00:43:24.727457 kubelet[2193]: E1101 00:43:24.726988 2193 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0298c754-9c20-445e-a53e-ffa3a30ff696-kube-proxy podName:0298c754-9c20-445e-a53e-ffa3a30ff696 nodeName:}" failed. No retries permitted until 2025-11-01 00:43:25.22694666 +0000 UTC m=+8.174048042 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/0298c754-9c20-445e-a53e-ffa3a30ff696-kube-proxy") pod "kube-proxy-glbnp" (UID: "0298c754-9c20-445e-a53e-ffa3a30ff696") : failed to sync configmap cache: timed out waiting for the condition Nov 1 00:43:24.735340 kubelet[2193]: E1101 00:43:24.735279 2193 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Nov 1 00:43:24.735340 kubelet[2193]: E1101 00:43:24.735335 2193 projected.go:194] Error preparing data for projected volume kube-api-access-wwrzq for pod kube-system/kube-proxy-glbnp: failed to sync configmap cache: timed out waiting for the condition Nov 1 00:43:24.735646 kubelet[2193]: E1101 00:43:24.735436 2193 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0298c754-9c20-445e-a53e-ffa3a30ff696-kube-api-access-wwrzq podName:0298c754-9c20-445e-a53e-ffa3a30ff696 nodeName:}" failed. No retries permitted until 2025-11-01 00:43:25.23540872 +0000 UTC m=+8.182510093 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wwrzq" (UniqueName: "kubernetes.io/projected/0298c754-9c20-445e-a53e-ffa3a30ff696-kube-api-access-wwrzq") pod "kube-proxy-glbnp" (UID: "0298c754-9c20-445e-a53e-ffa3a30ff696") : failed to sync configmap cache: timed out waiting for the condition Nov 1 00:43:24.765363 env[1301]: time="2025-11-01T00:43:24.765309217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-v96kx,Uid:65491bb9-5c05-4995-aab4-89157534cb6a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8d329a0180d45cd08adae736524cb22fbabb759d8c6cdc1c579df6b2152fb7b3\"" Nov 1 00:43:24.771716 env[1301]: time="2025-11-01T00:43:24.771666532Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 1 00:43:25.370732 env[1301]: time="2025-11-01T00:43:25.370661290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-glbnp,Uid:0298c754-9c20-445e-a53e-ffa3a30ff696,Namespace:kube-system,Attempt:0,}" Nov 1 00:43:25.393807 env[1301]: time="2025-11-01T00:43:25.393623180Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:43:25.393807 env[1301]: time="2025-11-01T00:43:25.393683674Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:43:25.393807 env[1301]: time="2025-11-01T00:43:25.393702096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:43:25.394910 env[1301]: time="2025-11-01T00:43:25.394501178Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4043b3187cb7d7acb544c48caff621308782fe0ba1c3a40cd9ee7e1c6fd1d706 pid=2304 runtime=io.containerd.runc.v2 Nov 1 00:43:25.480897 env[1301]: time="2025-11-01T00:43:25.480794732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-glbnp,Uid:0298c754-9c20-445e-a53e-ffa3a30ff696,Namespace:kube-system,Attempt:0,} returns sandbox id \"4043b3187cb7d7acb544c48caff621308782fe0ba1c3a40cd9ee7e1c6fd1d706\"" Nov 1 00:43:25.499776 env[1301]: time="2025-11-01T00:43:25.491871471Z" level=info msg="CreateContainer within sandbox \"4043b3187cb7d7acb544c48caff621308782fe0ba1c3a40cd9ee7e1c6fd1d706\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 00:43:26.417884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount171590520.mount: Deactivated successfully. 
Nov 1 00:43:26.433883 env[1301]: time="2025-11-01T00:43:26.433795119Z" level=info msg="CreateContainer within sandbox \"4043b3187cb7d7acb544c48caff621308782fe0ba1c3a40cd9ee7e1c6fd1d706\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2dd6041f4e51c5e7e68769a7909d8130f77090d6361b9da43d1d352d4316e004\"" Nov 1 00:43:26.438323 env[1301]: time="2025-11-01T00:43:26.438279263Z" level=info msg="StartContainer for \"2dd6041f4e51c5e7e68769a7909d8130f77090d6361b9da43d1d352d4316e004\"" Nov 1 00:43:26.525982 env[1301]: time="2025-11-01T00:43:26.525921474Z" level=info msg="StartContainer for \"2dd6041f4e51c5e7e68769a7909d8130f77090d6361b9da43d1d352d4316e004\" returns successfully" Nov 1 00:43:26.693000 audit[2403]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2403 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:26.700414 kernel: kauditd_printk_skb: 4 callbacks suppressed Nov 1 00:43:26.700567 kernel: audit: type=1325 audit(1761957806.693:215): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2403 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:26.693000 audit[2403]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc0260d290 a2=0 a3=7ffc0260d27c items=0 ppid=2355 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:26.693000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 00:43:26.764826 kernel: audit: type=1300 audit(1761957806.693:215): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc0260d290 a2=0 a3=7ffc0260d27c items=0 ppid=2355 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:26.764961 kernel: audit: type=1327 audit(1761957806.693:215): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 00:43:26.784346 kernel: audit: type=1325 audit(1761957806.693:216): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2404 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:26.693000 audit[2404]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2404 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:26.693000 audit[2404]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd83935bc0 a2=0 a3=7ffd83935bac items=0 ppid=2355 pid=2404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:26.819595 kernel: audit: type=1300 audit(1761957806.693:216): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd83935bc0 a2=0 a3=7ffd83935bac items=0 ppid=2355 pid=2404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:26.693000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 00:43:26.836209 kernel: audit: type=1327 audit(1761957806.693:216): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 00:43:26.698000 audit[2407]: NETFILTER_CFG table=nat:40 family=10 entries=1 op=nft_register_chain pid=2407 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:26.852405 kernel: audit: type=1325 audit(1761957806.698:217): table=nat:40 family=10 entries=1 op=nft_register_chain pid=2407 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:26.698000 audit[2407]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe76527c00 a2=0 a3=7ffe76527bec items=0 ppid=2355 pid=2407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:26.886372 kernel: audit: type=1300 audit(1761957806.698:217): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe76527c00 a2=0 a3=7ffe76527bec items=0 ppid=2355 pid=2407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:26.698000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Nov 1 00:43:26.903225 kernel: audit: type=1327 audit(1761957806.698:217): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Nov 1 00:43:26.699000 audit[2408]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=2408 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:26.921232 kernel: audit: type=1325 audit(1761957806.699:218): table=filter:41 family=10 entries=1 op=nft_register_chain pid=2408 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:26.699000 audit[2408]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe05476b60 a2=0 a3=7ffe05476b4c items=0 ppid=2355 pid=2408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:26.699000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Nov 1 00:43:26.731000 audit[2406]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=2406 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:26.731000 audit[2406]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff0fa97f70 a2=0 a3=7fff0fa97f5c items=0 ppid=2355 pid=2406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:26.731000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Nov 1 00:43:26.741000 audit[2409]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2409 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:26.741000 audit[2409]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe94e8eb70 a2=0 a3=7ffe94e8eb5c items=0 ppid=2355 pid=2409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:26.741000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Nov 1 00:43:26.818000 audit[2410]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2410 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:26.818000 audit[2410]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc72916690 a2=0 a3=7ffc7291667c items=0 ppid=2355 pid=2410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 
00:43:26.818000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Nov 1 00:43:26.851000 audit[2412]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2412 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:26.851000 audit[2412]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcceef1e50 a2=0 a3=7ffcceef1e3c items=0 ppid=2355 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:26.851000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Nov 1 00:43:26.936000 audit[2415]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2415 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:26.936000 audit[2415]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc5ecdb4e0 a2=0 a3=7ffc5ecdb4cc items=0 ppid=2355 pid=2415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:26.936000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Nov 1 00:43:26.976000 audit[2416]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2416 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:26.976000 
audit[2416]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcb78cf390 a2=0 a3=7ffcb78cf37c items=0 ppid=2355 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:26.976000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Nov 1 00:43:27.002000 audit[2418]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2418 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:27.002000 audit[2418]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcbae61750 a2=0 a3=7ffcbae6173c items=0 ppid=2355 pid=2418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.002000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Nov 1 00:43:27.024000 audit[2419]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2419 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:27.024000 audit[2419]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd4b76a580 a2=0 a3=7ffd4b76a56c items=0 ppid=2355 pid=2419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.024000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Nov 1 00:43:27.052000 audit[2421]: 
NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2421 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:27.052000 audit[2421]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd6f775d90 a2=0 a3=7ffd6f775d7c items=0 ppid=2355 pid=2421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.052000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Nov 1 00:43:27.062417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2355941416.mount: Deactivated successfully. Nov 1 00:43:27.066000 audit[2424]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2424 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:27.066000 audit[2424]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe61788920 a2=0 a3=7ffe6178890c items=0 ppid=2355 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.066000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Nov 1 00:43:27.069000 audit[2425]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2425 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:27.069000 audit[2425]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 
a1=7ffe0cd31e40 a2=0 a3=7ffe0cd31e2c items=0 ppid=2355 pid=2425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.069000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Nov 1 00:43:27.077000 audit[2427]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2427 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:27.077000 audit[2427]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe09e285c0 a2=0 a3=7ffe09e285ac items=0 ppid=2355 pid=2427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.077000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Nov 1 00:43:27.081000 audit[2428]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2428 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:27.081000 audit[2428]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff4a7f41b0 a2=0 a3=7fff4a7f419c items=0 ppid=2355 pid=2428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.081000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Nov 1 00:43:27.091000 audit[2430]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2430 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:27.091000 audit[2430]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdd03736d0 a2=0 a3=7ffdd03736bc items=0 ppid=2355 pid=2430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.091000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Nov 1 00:43:27.099000 audit[2433]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2433 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:27.099000 audit[2433]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffee46e1d30 a2=0 a3=7ffee46e1d1c items=0 ppid=2355 pid=2433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.099000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Nov 1 00:43:27.110000 audit[2436]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2436 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:27.110000 audit[2436]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffe6bf1cb0 a2=0 a3=7fffe6bf1c9c items=0 ppid=2355 pid=2436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.110000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Nov 1 00:43:27.112000 audit[2437]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2437 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:27.112000 audit[2437]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffeb04b85a0 a2=0 a3=7ffeb04b858c items=0 ppid=2355 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.112000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Nov 1 00:43:27.118000 audit[2439]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2439 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:27.118000 audit[2439]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffdbdeae4a0 a2=0 a3=7ffdbdeae48c items=0 ppid=2355 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.118000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 00:43:27.125000 audit[2442]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2442 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 
00:43:27.125000 audit[2442]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffde2a05620 a2=0 a3=7ffde2a0560c items=0 ppid=2355 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.125000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 00:43:27.127000 audit[2443]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2443 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:27.127000 audit[2443]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce101c960 a2=0 a3=7ffce101c94c items=0 ppid=2355 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.127000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Nov 1 00:43:27.131000 audit[2445]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2445 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:43:27.131000 audit[2445]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffe57d3b3e0 a2=0 a3=7ffe57d3b3cc items=0 ppid=2355 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.131000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Nov 1 00:43:27.177000 audit[2451]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2451 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:27.177000 audit[2451]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc8e17d0a0 a2=0 a3=7ffc8e17d08c items=0 ppid=2355 pid=2451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.177000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:27.192000 audit[2451]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2451 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:27.192000 audit[2451]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffc8e17d0a0 a2=0 a3=7ffc8e17d08c items=0 ppid=2355 pid=2451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.192000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:27.195000 audit[2456]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2456 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:27.195000 audit[2456]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffeed2f57c0 a2=0 a3=7ffeed2f57ac items=0 ppid=2355 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.195000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Nov 1 00:43:27.200000 audit[2458]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2458 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:27.200000 audit[2458]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc1db19b70 a2=0 a3=7ffc1db19b5c items=0 ppid=2355 pid=2458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.200000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Nov 1 00:43:27.208000 audit[2461]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2461 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:27.208000 audit[2461]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffcac35f040 a2=0 a3=7ffcac35f02c items=0 ppid=2355 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.208000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Nov 1 00:43:27.210000 audit[2462]: 
NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2462 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:27.210000 audit[2462]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff5bf49660 a2=0 a3=7fff5bf4964c items=0 ppid=2355 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.210000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Nov 1 00:43:27.216000 audit[2464]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2464 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:27.216000 audit[2464]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffbca1c290 a2=0 a3=7fffbca1c27c items=0 ppid=2355 pid=2464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.216000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Nov 1 00:43:27.218000 audit[2465]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2465 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:27.218000 audit[2465]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe52d1aeb0 a2=0 a3=7ffe52d1ae9c items=0 ppid=2355 pid=2465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.218000 audit: 
PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Nov 1 00:43:27.223000 audit[2467]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2467 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:27.223000 audit[2467]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe35e96c80 a2=0 a3=7ffe35e96c6c items=0 ppid=2355 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.223000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Nov 1 00:43:27.230000 audit[2470]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2470 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:27.230000 audit[2470]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff3e6da070 a2=0 a3=7fff3e6da05c items=0 ppid=2355 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.230000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Nov 1 00:43:27.232000 audit[2471]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2471 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:27.232000 audit[2471]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd3b12d60 a2=0 a3=7fffd3b12d4c items=0 ppid=2355 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.232000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Nov 1 00:43:27.236000 audit[2473]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2473 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:27.236000 audit[2473]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc25df78b0 a2=0 a3=7ffc25df789c items=0 ppid=2355 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.236000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Nov 1 00:43:27.238000 audit[2474]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2474 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:27.238000 audit[2474]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdcea92af0 a2=0 a3=7ffdcea92adc items=0 ppid=2355 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.238000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Nov 1 00:43:27.244000 audit[2476]: NETFILTER_CFG table=filter:76 
family=10 entries=1 op=nft_register_rule pid=2476 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:27.244000 audit[2476]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffea8100ea0 a2=0 a3=7ffea8100e8c items=0 ppid=2355 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.244000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Nov 1 00:43:27.251000 audit[2479]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2479 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:27.251000 audit[2479]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffef27b3140 a2=0 a3=7ffef27b312c items=0 ppid=2355 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.251000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Nov 1 00:43:27.258000 audit[2482]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2482 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:27.258000 audit[2482]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcfce3d700 a2=0 a3=7ffcfce3d6ec items=0 ppid=2355 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.258000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Nov 1 00:43:27.260000 audit[2483]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2483 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:27.260000 audit[2483]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc4021ffe0 a2=0 a3=7ffc4021ffcc items=0 ppid=2355 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.260000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Nov 1 00:43:27.265000 audit[2485]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2485 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:27.265000 audit[2485]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff8779e6f0 a2=0 a3=7fff8779e6dc items=0 ppid=2355 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.265000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 00:43:27.272000 audit[2488]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2488 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:27.272000 audit[2488]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffc0a381d20 a2=0 a3=7ffc0a381d0c items=0 ppid=2355 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.272000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 00:43:27.274000 audit[2489]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2489 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:27.274000 audit[2489]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc4c978d60 a2=0 a3=7ffc4c978d4c items=0 ppid=2355 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.274000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Nov 1 00:43:27.279000 audit[2491]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2491 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:27.279000 audit[2491]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffcfb23b190 a2=0 a3=7ffcfb23b17c items=0 ppid=2355 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.279000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Nov 1 00:43:27.282000 audit[2492]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2492 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:27.282000 audit[2492]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff252a4ee0 a2=0 a3=7fff252a4ecc items=0 ppid=2355 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.282000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Nov 1 00:43:27.288000 audit[2494]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2494 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:27.288000 audit[2494]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff0e4f3ea0 a2=0 a3=7fff0e4f3e8c items=0 ppid=2355 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.288000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 00:43:27.295000 audit[2497]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2497 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:27.295000 audit[2497]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffff282590 a2=0 a3=7fffff28257c items=0 ppid=2355 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.295000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 00:43:27.303000 audit[2499]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2499 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Nov 1 00:43:27.303000 audit[2499]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffc901c7040 a2=0 a3=7ffc901c702c items=0 ppid=2355 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.303000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:27.304000 audit[2499]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2499 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Nov 1 00:43:27.304000 audit[2499]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffc901c7040 a2=0 a3=7ffc901c702c items=0 ppid=2355 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:27.304000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:27.488899 kubelet[2193]: I1101 00:43:27.488656 2193 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-glbnp" podStartSLOduration=4.488626929 podStartE2EDuration="4.488626929s" podCreationTimestamp="2025-11-01 00:43:23 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:43:27.488492979 +0000 UTC m=+10.435594361" watchObservedRunningTime="2025-11-01 00:43:27.488626929 +0000 UTC m=+10.435728314" Nov 1 00:43:28.347585 env[1301]: time="2025-11-01T00:43:28.347502979Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:28.350572 env[1301]: time="2025-11-01T00:43:28.350516801Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:28.353136 env[1301]: time="2025-11-01T00:43:28.353067927Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:28.355961 env[1301]: time="2025-11-01T00:43:28.355917282Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:28.356575 env[1301]: time="2025-11-01T00:43:28.356513155Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 1 00:43:28.364662 env[1301]: time="2025-11-01T00:43:28.364592035Z" level=info msg="CreateContainer within sandbox \"8d329a0180d45cd08adae736524cb22fbabb759d8c6cdc1c579df6b2152fb7b3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 1 00:43:28.388065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2138466957.mount: Deactivated successfully. 
Nov 1 00:43:28.395576 env[1301]: time="2025-11-01T00:43:28.395509425Z" level=info msg="CreateContainer within sandbox \"8d329a0180d45cd08adae736524cb22fbabb759d8c6cdc1c579df6b2152fb7b3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e5c7e0ed483c648732cc512470269b62498eac5a14b0ce90be31e77783559930\"" Nov 1 00:43:28.399736 env[1301]: time="2025-11-01T00:43:28.397122446Z" level=info msg="StartContainer for \"e5c7e0ed483c648732cc512470269b62498eac5a14b0ce90be31e77783559930\"" Nov 1 00:43:28.507203 env[1301]: time="2025-11-01T00:43:28.498089373Z" level=info msg="StartContainer for \"e5c7e0ed483c648732cc512470269b62498eac5a14b0ce90be31e77783559930\" returns successfully" Nov 1 00:43:29.379150 systemd[1]: run-containerd-runc-k8s.io-e5c7e0ed483c648732cc512470269b62498eac5a14b0ce90be31e77783559930-runc.kzfWbh.mount: Deactivated successfully. Nov 1 00:43:36.024795 sudo[1561]: pam_unix(sudo:session): session closed for user root Nov 1 00:43:36.056878 kernel: kauditd_printk_skb: 143 callbacks suppressed Nov 1 00:43:36.057127 kernel: audit: type=1106 audit(1761957816.023:266): pid=1561 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:43:36.023000 audit[1561]: USER_END pid=1561 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:43:36.023000 audit[1561]: CRED_DISP pid=1561 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Nov 1 00:43:36.100336 kernel: audit: type=1104 audit(1761957816.023:267): pid=1561 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:43:36.101358 sshd[1557]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:36.106670 systemd[1]: sshd@6-10.128.0.16:22-139.178.68.195:55228.service: Deactivated successfully. Nov 1 00:43:36.108873 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 00:43:36.110046 systemd-logind[1286]: Session 7 logged out. Waiting for processes to exit. Nov 1 00:43:36.112093 systemd-logind[1286]: Removed session 7. Nov 1 00:43:36.101000 audit[1557]: USER_END pid=1557 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:43:36.152201 kernel: audit: type=1106 audit(1761957816.101:268): pid=1557 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:43:36.102000 audit[1557]: CRED_DISP pid=1557 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:43:36.217280 kernel: audit: type=1104 audit(1761957816.102:269): pid=1557 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh 
res=success' Nov 1 00:43:36.217466 kernel: audit: type=1131 audit(1761957816.105:270): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.128.0.16:22-139.178.68.195:55228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:36.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.128.0.16:22-139.178.68.195:55228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:37.184000 audit[2582]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2582 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:37.202225 kernel: audit: type=1325 audit(1761957817.184:271): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2582 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:37.184000 audit[2582]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffee00fb2a0 a2=0 a3=7ffee00fb28c items=0 ppid=2355 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:37.247213 kernel: audit: type=1300 audit(1761957817.184:271): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffee00fb2a0 a2=0 a3=7ffee00fb28c items=0 ppid=2355 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:37.184000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:37.295229 kernel: audit: type=1327 audit(1761957817.184:271): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:37.211000 audit[2582]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2582 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:37.320209 kernel: audit: type=1325 audit(1761957817.211:272): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2582 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:37.211000 audit[2582]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffee00fb2a0 a2=0 a3=0 items=0 ppid=2355 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:37.368236 kernel: audit: type=1300 audit(1761957817.211:272): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffee00fb2a0 a2=0 a3=0 items=0 ppid=2355 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:37.211000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:37.275000 audit[2584]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2584 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:37.275000 audit[2584]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe0b28f6c0 a2=0 a3=7ffe0b28f6ac items=0 ppid=2355 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:37.275000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:37.296000 audit[2584]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2584 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:37.296000 audit[2584]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe0b28f6c0 a2=0 a3=0 items=0 ppid=2355 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:37.296000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:42.567000 audit[2586]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2586 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:42.574700 kernel: kauditd_printk_skb: 7 callbacks suppressed Nov 1 00:43:42.574848 kernel: audit: type=1325 audit(1761957822.567:275): table=filter:93 family=2 entries=17 op=nft_register_rule pid=2586 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:42.567000 audit[2586]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe60207870 a2=0 a3=7ffe6020785c items=0 ppid=2355 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:42.631245 kernel: audit: type=1300 audit(1761957822.567:275): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe60207870 a2=0 a3=7ffe6020785c items=0 ppid=2355 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 
00:43:42.631440 kernel: audit: type=1327 audit(1761957822.567:275): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:42.567000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:42.605000 audit[2586]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2586 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:42.657204 kernel: audit: type=1325 audit(1761957822.605:276): table=nat:94 family=2 entries=12 op=nft_register_rule pid=2586 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:42.605000 audit[2586]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe60207870 a2=0 a3=0 items=0 ppid=2355 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:42.698207 kernel: audit: type=1300 audit(1761957822.605:276): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe60207870 a2=0 a3=0 items=0 ppid=2355 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:42.605000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:42.718206 kernel: audit: type=1327 audit(1761957822.605:276): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:42.719000 audit[2588]: NETFILTER_CFG table=filter:95 family=2 entries=18 op=nft_register_rule pid=2588 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:42.737203 kernel: audit: type=1325 
audit(1761957822.719:277): table=filter:95 family=2 entries=18 op=nft_register_rule pid=2588 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:42.719000 audit[2588]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff75c529b0 a2=0 a3=7fff75c5299c items=0 ppid=2355 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:42.772213 kernel: audit: type=1300 audit(1761957822.719:277): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7fff75c529b0 a2=0 a3=7fff75c5299c items=0 ppid=2355 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:42.719000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:42.791210 kernel: audit: type=1327 audit(1761957822.719:277): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:42.773000 audit[2588]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2588 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:42.808206 kernel: audit: type=1325 audit(1761957822.773:278): table=nat:96 family=2 entries=12 op=nft_register_rule pid=2588 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:42.773000 audit[2588]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff75c529b0 a2=0 a3=0 items=0 ppid=2355 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:42.773000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:43.905000 audit[2590]: NETFILTER_CFG table=filter:97 family=2 entries=19 op=nft_register_rule pid=2590 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:43.905000 audit[2590]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffcce3d270 a2=0 a3=7fffcce3d25c items=0 ppid=2355 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:43.905000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:43.912000 audit[2590]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2590 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:43.912000 audit[2590]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffcce3d270 a2=0 a3=0 items=0 ppid=2355 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:43.912000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:45.143000 audit[2592]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=2592 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:45.143000 audit[2592]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff540dd3b0 a2=0 a3=7fff540dd39c items=0 ppid=2355 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Nov 1 00:43:45.143000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:45.148000 audit[2592]: NETFILTER_CFG table=nat:100 family=2 entries=12 op=nft_register_rule pid=2592 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:45.148000 audit[2592]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff540dd3b0 a2=0 a3=0 items=0 ppid=2355 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:45.148000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:45.188603 kubelet[2193]: I1101 00:43:45.188487 2193 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-v96kx" podStartSLOduration=17.597373739 podStartE2EDuration="21.188423398s" podCreationTimestamp="2025-11-01 00:43:24 +0000 UTC" firstStartedPulling="2025-11-01 00:43:24.767791861 +0000 UTC m=+7.714893236" lastFinishedPulling="2025-11-01 00:43:28.358841535 +0000 UTC m=+11.305942895" observedRunningTime="2025-11-01 00:43:29.481671723 +0000 UTC m=+12.428773105" watchObservedRunningTime="2025-11-01 00:43:45.188423398 +0000 UTC m=+28.135524780" Nov 1 00:43:45.336644 kubelet[2193]: I1101 00:43:45.336539 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b2f503a-5234-4013-a858-f25288de5911-tigera-ca-bundle\") pod \"calico-typha-854dd8c5dd-llvvc\" (UID: \"0b2f503a-5234-4013-a858-f25288de5911\") " pod="calico-system/calico-typha-854dd8c5dd-llvvc" Nov 1 00:43:45.336644 kubelet[2193]: I1101 00:43:45.336645 2193 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpxzh\" (UniqueName: \"kubernetes.io/projected/0b2f503a-5234-4013-a858-f25288de5911-kube-api-access-gpxzh\") pod \"calico-typha-854dd8c5dd-llvvc\" (UID: \"0b2f503a-5234-4013-a858-f25288de5911\") " pod="calico-system/calico-typha-854dd8c5dd-llvvc" Nov 1 00:43:45.336998 kubelet[2193]: I1101 00:43:45.336684 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0b2f503a-5234-4013-a858-f25288de5911-typha-certs\") pod \"calico-typha-854dd8c5dd-llvvc\" (UID: \"0b2f503a-5234-4013-a858-f25288de5911\") " pod="calico-system/calico-typha-854dd8c5dd-llvvc" Nov 1 00:43:45.500147 env[1301]: time="2025-11-01T00:43:45.499951819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-854dd8c5dd-llvvc,Uid:0b2f503a-5234-4013-a858-f25288de5911,Namespace:calico-system,Attempt:0,}" Nov 1 00:43:45.538456 kubelet[2193]: I1101 00:43:45.538318 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpblr\" (UniqueName: \"kubernetes.io/projected/738a4efd-af41-4477-8cea-60e2fb6dd5ba-kube-api-access-xpblr\") pod \"calico-node-2jntr\" (UID: \"738a4efd-af41-4477-8cea-60e2fb6dd5ba\") " pod="calico-system/calico-node-2jntr" Nov 1 00:43:45.539592 kubelet[2193]: I1101 00:43:45.538403 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/738a4efd-af41-4477-8cea-60e2fb6dd5ba-node-certs\") pod \"calico-node-2jntr\" (UID: \"738a4efd-af41-4477-8cea-60e2fb6dd5ba\") " pod="calico-system/calico-node-2jntr" Nov 1 00:43:45.539592 kubelet[2193]: I1101 00:43:45.538931 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/738a4efd-af41-4477-8cea-60e2fb6dd5ba-flexvol-driver-host\") pod \"calico-node-2jntr\" (UID: \"738a4efd-af41-4477-8cea-60e2fb6dd5ba\") " pod="calico-system/calico-node-2jntr" Nov 1 00:43:45.539592 kubelet[2193]: I1101 00:43:45.539093 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/738a4efd-af41-4477-8cea-60e2fb6dd5ba-lib-modules\") pod \"calico-node-2jntr\" (UID: \"738a4efd-af41-4477-8cea-60e2fb6dd5ba\") " pod="calico-system/calico-node-2jntr" Nov 1 00:43:45.539592 kubelet[2193]: I1101 00:43:45.539120 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/738a4efd-af41-4477-8cea-60e2fb6dd5ba-policysync\") pod \"calico-node-2jntr\" (UID: \"738a4efd-af41-4477-8cea-60e2fb6dd5ba\") " pod="calico-system/calico-node-2jntr" Nov 1 00:43:45.539592 kubelet[2193]: I1101 00:43:45.539146 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/738a4efd-af41-4477-8cea-60e2fb6dd5ba-var-lib-calico\") pod \"calico-node-2jntr\" (UID: \"738a4efd-af41-4477-8cea-60e2fb6dd5ba\") " pod="calico-system/calico-node-2jntr" Nov 1 00:43:45.540268 kubelet[2193]: I1101 00:43:45.539233 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/738a4efd-af41-4477-8cea-60e2fb6dd5ba-cni-log-dir\") pod \"calico-node-2jntr\" (UID: \"738a4efd-af41-4477-8cea-60e2fb6dd5ba\") " pod="calico-system/calico-node-2jntr" Nov 1 00:43:45.540268 kubelet[2193]: I1101 00:43:45.539261 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/738a4efd-af41-4477-8cea-60e2fb6dd5ba-xtables-lock\") pod 
\"calico-node-2jntr\" (UID: \"738a4efd-af41-4477-8cea-60e2fb6dd5ba\") " pod="calico-system/calico-node-2jntr" Nov 1 00:43:45.540268 kubelet[2193]: I1101 00:43:45.539349 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/738a4efd-af41-4477-8cea-60e2fb6dd5ba-var-run-calico\") pod \"calico-node-2jntr\" (UID: \"738a4efd-af41-4477-8cea-60e2fb6dd5ba\") " pod="calico-system/calico-node-2jntr" Nov 1 00:43:45.540268 kubelet[2193]: I1101 00:43:45.539384 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/738a4efd-af41-4477-8cea-60e2fb6dd5ba-cni-net-dir\") pod \"calico-node-2jntr\" (UID: \"738a4efd-af41-4477-8cea-60e2fb6dd5ba\") " pod="calico-system/calico-node-2jntr" Nov 1 00:43:45.540268 kubelet[2193]: I1101 00:43:45.539412 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/738a4efd-af41-4477-8cea-60e2fb6dd5ba-cni-bin-dir\") pod \"calico-node-2jntr\" (UID: \"738a4efd-af41-4477-8cea-60e2fb6dd5ba\") " pod="calico-system/calico-node-2jntr" Nov 1 00:43:45.540598 kubelet[2193]: I1101 00:43:45.539440 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/738a4efd-af41-4477-8cea-60e2fb6dd5ba-tigera-ca-bundle\") pod \"calico-node-2jntr\" (UID: \"738a4efd-af41-4477-8cea-60e2fb6dd5ba\") " pod="calico-system/calico-node-2jntr" Nov 1 00:43:45.540718 env[1301]: time="2025-11-01T00:43:45.539245224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:43:45.540718 env[1301]: time="2025-11-01T00:43:45.539329066Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:43:45.540718 env[1301]: time="2025-11-01T00:43:45.539350409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:43:45.541326 env[1301]: time="2025-11-01T00:43:45.541090748Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/74e138743dfe1c4daa5d183eed8f2158b78ef4090cc15d1fd7281a2976ef68a6 pid=2603 runtime=io.containerd.runc.v2 Nov 1 00:43:45.666045 kubelet[2193]: E1101 00:43:45.665955 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fvm9x" podUID="1db94968-800e-4bd7-88c1-2551a090e4ab" Nov 1 00:43:45.670672 kubelet[2193]: E1101 00:43:45.670612 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.670672 kubelet[2193]: W1101 00:43:45.670661 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.670922 kubelet[2193]: E1101 00:43:45.670734 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:45.688448 kubelet[2193]: E1101 00:43:45.688395 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.688448 kubelet[2193]: W1101 00:43:45.688443 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.688771 kubelet[2193]: E1101 00:43:45.688483 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:45.696383 kubelet[2193]: E1101 00:43:45.696316 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.696383 kubelet[2193]: W1101 00:43:45.696360 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.696735 kubelet[2193]: E1101 00:43:45.696398 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:45.698372 kubelet[2193]: E1101 00:43:45.698330 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.698372 kubelet[2193]: W1101 00:43:45.698368 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.698620 kubelet[2193]: E1101 00:43:45.698402 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:45.698831 kubelet[2193]: E1101 00:43:45.698803 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.698831 kubelet[2193]: W1101 00:43:45.698831 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.699007 kubelet[2193]: E1101 00:43:45.698854 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:45.699306 kubelet[2193]: E1101 00:43:45.699280 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.699306 kubelet[2193]: W1101 00:43:45.699306 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.699495 kubelet[2193]: E1101 00:43:45.699328 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:45.703452 kubelet[2193]: E1101 00:43:45.703422 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.703452 kubelet[2193]: W1101 00:43:45.703451 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.703654 kubelet[2193]: E1101 00:43:45.703476 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:45.707417 kubelet[2193]: E1101 00:43:45.707381 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.707417 kubelet[2193]: W1101 00:43:45.707416 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.707620 kubelet[2193]: E1101 00:43:45.707442 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:45.708299 kubelet[2193]: E1101 00:43:45.707927 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.708299 kubelet[2193]: W1101 00:43:45.707950 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.708299 kubelet[2193]: E1101 00:43:45.707976 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:45.708648 kubelet[2193]: E1101 00:43:45.708627 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.708934 kubelet[2193]: W1101 00:43:45.708748 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.708934 kubelet[2193]: E1101 00:43:45.708777 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:45.709331 kubelet[2193]: E1101 00:43:45.709310 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.709477 kubelet[2193]: W1101 00:43:45.709454 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.709604 kubelet[2193]: E1101 00:43:45.709583 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:45.710027 kubelet[2193]: E1101 00:43:45.710005 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.710209 kubelet[2193]: W1101 00:43:45.710155 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.710354 kubelet[2193]: E1101 00:43:45.710332 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:45.710775 kubelet[2193]: E1101 00:43:45.710754 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.710925 kubelet[2193]: W1101 00:43:45.710901 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.711054 kubelet[2193]: E1101 00:43:45.711034 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:45.711519 kubelet[2193]: E1101 00:43:45.711497 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.711652 kubelet[2193]: W1101 00:43:45.711629 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.711780 kubelet[2193]: E1101 00:43:45.711758 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:45.712222 kubelet[2193]: E1101 00:43:45.712200 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.712387 kubelet[2193]: W1101 00:43:45.712363 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.712609 kubelet[2193]: E1101 00:43:45.712585 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:45.713079 kubelet[2193]: E1101 00:43:45.713058 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.713271 kubelet[2193]: W1101 00:43:45.713234 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.713404 kubelet[2193]: E1101 00:43:45.713382 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:45.713819 kubelet[2193]: E1101 00:43:45.713799 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.713960 kubelet[2193]: W1101 00:43:45.713939 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.716029 kubelet[2193]: E1101 00:43:45.715993 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:45.717763 kubelet[2193]: E1101 00:43:45.717738 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.718023 kubelet[2193]: W1101 00:43:45.717995 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.718237 kubelet[2193]: E1101 00:43:45.718212 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:45.718991 kubelet[2193]: E1101 00:43:45.718970 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.719195 kubelet[2193]: W1101 00:43:45.719152 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.719369 kubelet[2193]: E1101 00:43:45.719347 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:45.719896 kubelet[2193]: E1101 00:43:45.719875 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.720073 kubelet[2193]: W1101 00:43:45.720051 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.720236 kubelet[2193]: E1101 00:43:45.720214 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:45.720839 kubelet[2193]: E1101 00:43:45.720815 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.720986 kubelet[2193]: W1101 00:43:45.720964 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.721147 kubelet[2193]: E1101 00:43:45.721125 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:45.722326 kubelet[2193]: E1101 00:43:45.722307 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.722543 kubelet[2193]: W1101 00:43:45.722517 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.725613 kubelet[2193]: E1101 00:43:45.725572 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:45.727304 env[1301]: time="2025-11-01T00:43:45.727236375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2jntr,Uid:738a4efd-af41-4477-8cea-60e2fb6dd5ba,Namespace:calico-system,Attempt:0,}" Nov 1 00:43:45.743430 kubelet[2193]: E1101 00:43:45.742132 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.743430 kubelet[2193]: W1101 00:43:45.742166 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.743430 kubelet[2193]: E1101 00:43:45.742219 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:45.743430 kubelet[2193]: I1101 00:43:45.742282 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2rjx\" (UniqueName: \"kubernetes.io/projected/1db94968-800e-4bd7-88c1-2551a090e4ab-kube-api-access-x2rjx\") pod \"csi-node-driver-fvm9x\" (UID: \"1db94968-800e-4bd7-88c1-2551a090e4ab\") " pod="calico-system/csi-node-driver-fvm9x" Nov 1 00:43:45.743430 kubelet[2193]: E1101 00:43:45.742717 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.743430 kubelet[2193]: W1101 00:43:45.742736 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.743430 kubelet[2193]: E1101 00:43:45.742761 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:45.743430 kubelet[2193]: I1101 00:43:45.742789 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1db94968-800e-4bd7-88c1-2551a090e4ab-kubelet-dir\") pod \"csi-node-driver-fvm9x\" (UID: \"1db94968-800e-4bd7-88c1-2551a090e4ab\") " pod="calico-system/csi-node-driver-fvm9x" Nov 1 00:43:45.743430 kubelet[2193]: E1101 00:43:45.743203 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.744094 kubelet[2193]: W1101 00:43:45.743224 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.744094 kubelet[2193]: E1101 00:43:45.743248 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:45.744094 kubelet[2193]: I1101 00:43:45.743285 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1db94968-800e-4bd7-88c1-2551a090e4ab-registration-dir\") pod \"csi-node-driver-fvm9x\" (UID: \"1db94968-800e-4bd7-88c1-2551a090e4ab\") " pod="calico-system/csi-node-driver-fvm9x" Nov 1 00:43:45.744876 kubelet[2193]: E1101 00:43:45.744586 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.744876 kubelet[2193]: W1101 00:43:45.744608 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.744876 kubelet[2193]: E1101 00:43:45.744766 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:45.744876 kubelet[2193]: I1101 00:43:45.744807 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1db94968-800e-4bd7-88c1-2551a090e4ab-socket-dir\") pod \"csi-node-driver-fvm9x\" (UID: \"1db94968-800e-4bd7-88c1-2551a090e4ab\") " pod="calico-system/csi-node-driver-fvm9x" Nov 1 00:43:45.745473 kubelet[2193]: E1101 00:43:45.745453 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.745770 kubelet[2193]: W1101 00:43:45.745613 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.745944 kubelet[2193]: E1101 00:43:45.745918 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:45.746418 kubelet[2193]: E1101 00:43:45.746387 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.746572 kubelet[2193]: W1101 00:43:45.746550 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.746832 kubelet[2193]: E1101 00:43:45.746813 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:45.747199 kubelet[2193]: E1101 00:43:45.747166 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.747344 kubelet[2193]: W1101 00:43:45.747324 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.747607 kubelet[2193]: E1101 00:43:45.747588 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:45.747967 kubelet[2193]: E1101 00:43:45.747939 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.748108 kubelet[2193]: W1101 00:43:45.748088 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.748397 kubelet[2193]: E1101 00:43:45.748376 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:45.748630 kubelet[2193]: I1101 00:43:45.748607 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1db94968-800e-4bd7-88c1-2551a090e4ab-varrun\") pod \"csi-node-driver-fvm9x\" (UID: \"1db94968-800e-4bd7-88c1-2551a090e4ab\") " pod="calico-system/csi-node-driver-fvm9x" Nov 1 00:43:45.749209 kubelet[2193]: E1101 00:43:45.749187 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.749377 kubelet[2193]: W1101 00:43:45.749355 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.751292 kubelet[2193]: E1101 00:43:45.749584 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:45.751674 kubelet[2193]: E1101 00:43:45.751656 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.753271 kubelet[2193]: W1101 00:43:45.753234 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.753417 kubelet[2193]: E1101 00:43:45.753395 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:45.753922 kubelet[2193]: E1101 00:43:45.753904 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.754053 kubelet[2193]: W1101 00:43:45.754035 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.754168 kubelet[2193]: E1101 00:43:45.754151 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:45.754625 kubelet[2193]: E1101 00:43:45.754608 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.754751 kubelet[2193]: W1101 00:43:45.754733 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.754866 kubelet[2193]: E1101 00:43:45.754848 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:45.755304 kubelet[2193]: E1101 00:43:45.755281 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.755435 kubelet[2193]: W1101 00:43:45.755416 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.755541 kubelet[2193]: E1101 00:43:45.755523 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:45.755958 kubelet[2193]: E1101 00:43:45.755941 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.756492 kubelet[2193]: W1101 00:43:45.756468 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.756623 kubelet[2193]: E1101 00:43:45.756603 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:45.757099 kubelet[2193]: E1101 00:43:45.757081 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.757307 kubelet[2193]: W1101 00:43:45.757284 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.757846 kubelet[2193]: E1101 00:43:45.757821 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:45.836282 env[1301]: time="2025-11-01T00:43:45.835824063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:43:45.836282 env[1301]: time="2025-11-01T00:43:45.835908566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:43:45.836282 env[1301]: time="2025-11-01T00:43:45.835933068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:43:45.837862 env[1301]: time="2025-11-01T00:43:45.836966843Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e866939bc6b8fd56ef3e0ce1b7f23248df9102762d451478cca5e5a58b97ae53 pid=2688 runtime=io.containerd.runc.v2 Nov 1 00:43:45.852808 env[1301]: time="2025-11-01T00:43:45.852740111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-854dd8c5dd-llvvc,Uid:0b2f503a-5234-4013-a858-f25288de5911,Namespace:calico-system,Attempt:0,} returns sandbox id \"74e138743dfe1c4daa5d183eed8f2158b78ef4090cc15d1fd7281a2976ef68a6\"" Nov 1 00:43:45.856706 env[1301]: time="2025-11-01T00:43:45.856641085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 1 00:43:45.858951 kubelet[2193]: E1101 00:43:45.858135 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.858951 kubelet[2193]: W1101 00:43:45.858167 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.858951 kubelet[2193]: E1101 00:43:45.858221 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:45.858951 kubelet[2193]: E1101 00:43:45.858742 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.858951 kubelet[2193]: W1101 00:43:45.858761 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.858951 kubelet[2193]: E1101 00:43:45.858791 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:45.859810 kubelet[2193]: E1101 00:43:45.859778 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.860189 kubelet[2193]: W1101 00:43:45.859954 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.860189 kubelet[2193]: E1101 00:43:45.859989 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:45.860763 kubelet[2193]: E1101 00:43:45.860732 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.860910 kubelet[2193]: W1101 00:43:45.860889 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.861224 kubelet[2193]: E1101 00:43:45.861201 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:45.861796 kubelet[2193]: E1101 00:43:45.861776 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.862034 kubelet[2193]: W1101 00:43:45.861977 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.862307 kubelet[2193]: E1101 00:43:45.862281 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:45.863122 kubelet[2193]: E1101 00:43:45.863099 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.863343 kubelet[2193]: W1101 00:43:45.863319 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.863673 kubelet[2193]: E1101 00:43:45.863649 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:45.864087 kubelet[2193]: E1101 00:43:45.864068 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.864285 kubelet[2193]: W1101 00:43:45.864260 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.864571 kubelet[2193]: E1101 00:43:45.864549 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:45.864965 kubelet[2193]: E1101 00:43:45.864947 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.865102 kubelet[2193]: W1101 00:43:45.865069 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.865401 kubelet[2193]: E1101 00:43:45.865379 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:45.865824 kubelet[2193]: E1101 00:43:45.865803 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.865957 kubelet[2193]: W1101 00:43:45.865935 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.866230 kubelet[2193]: E1101 00:43:45.866205 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:45.869119 kubelet[2193]: E1101 00:43:45.869093 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.869349 kubelet[2193]: W1101 00:43:45.869322 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.869623 kubelet[2193]: E1101 00:43:45.869599 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:45.869977 kubelet[2193]: E1101 00:43:45.869958 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.870127 kubelet[2193]: W1101 00:43:45.870106 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.870439 kubelet[2193]: E1101 00:43:45.870416 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:45.871143 kubelet[2193]: E1101 00:43:45.871121 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.871318 kubelet[2193]: W1101 00:43:45.871294 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.871603 kubelet[2193]: E1101 00:43:45.871581 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:45.871939 kubelet[2193]: E1101 00:43:45.871920 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.874386 kubelet[2193]: W1101 00:43:45.874353 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.874691 kubelet[2193]: E1101 00:43:45.874669 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:45.875069 kubelet[2193]: E1101 00:43:45.875051 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.875346 kubelet[2193]: W1101 00:43:45.875311 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.875639 kubelet[2193]: E1101 00:43:45.875616 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:45.875984 kubelet[2193]: E1101 00:43:45.875966 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.876101 kubelet[2193]: W1101 00:43:45.876082 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.876374 kubelet[2193]: E1101 00:43:45.876354 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:45.876765 kubelet[2193]: E1101 00:43:45.876747 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.876937 kubelet[2193]: W1101 00:43:45.876916 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.877213 kubelet[2193]: E1101 00:43:45.877192 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:45.877735 kubelet[2193]: E1101 00:43:45.877716 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.880086 kubelet[2193]: W1101 00:43:45.880055 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.880583 kubelet[2193]: E1101 00:43:45.880560 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:45.881848 kubelet[2193]: E1101 00:43:45.881826 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.881983 kubelet[2193]: W1101 00:43:45.881964 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.882380 kubelet[2193]: E1101 00:43:45.882302 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:45.882833 kubelet[2193]: E1101 00:43:45.882813 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.882961 kubelet[2193]: W1101 00:43:45.882942 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.883323 kubelet[2193]: E1101 00:43:45.883302 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:45.887427 kubelet[2193]: E1101 00:43:45.887396 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.887642 kubelet[2193]: W1101 00:43:45.887613 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.887995 kubelet[2193]: E1101 00:43:45.887973 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:45.889535 kubelet[2193]: E1101 00:43:45.889497 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.889535 kubelet[2193]: W1101 00:43:45.889524 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.889814 kubelet[2193]: E1101 00:43:45.889747 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:45.890007 kubelet[2193]: E1101 00:43:45.889925 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.890007 kubelet[2193]: W1101 00:43:45.889946 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.890146 kubelet[2193]: E1101 00:43:45.890069 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:45.890394 kubelet[2193]: E1101 00:43:45.890360 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.890394 kubelet[2193]: W1101 00:43:45.890384 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.890571 kubelet[2193]: E1101 00:43:45.890541 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:45.890859 kubelet[2193]: E1101 00:43:45.890821 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.890859 kubelet[2193]: W1101 00:43:45.890840 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.890987 kubelet[2193]: E1101 00:43:45.890865 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:45.895675 kubelet[2193]: E1101 00:43:45.895308 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.895675 kubelet[2193]: W1101 00:43:45.895339 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.895675 kubelet[2193]: E1101 00:43:45.895368 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:45.920379 kubelet[2193]: E1101 00:43:45.920338 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:45.920675 kubelet[2193]: W1101 00:43:45.920635 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:45.920910 kubelet[2193]: E1101 00:43:45.920879 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:46.056115 env[1301]: time="2025-11-01T00:43:46.056044265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2jntr,Uid:738a4efd-af41-4477-8cea-60e2fb6dd5ba,Namespace:calico-system,Attempt:0,} returns sandbox id \"e866939bc6b8fd56ef3e0ce1b7f23248df9102762d451478cca5e5a58b97ae53\"" Nov 1 00:43:46.163000 audit[2756]: NETFILTER_CFG table=filter:101 family=2 entries=22 op=nft_register_rule pid=2756 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:46.163000 audit[2756]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffdfda58fd0 a2=0 a3=7ffdfda58fbc items=0 ppid=2355 pid=2756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:46.163000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:46.170000 audit[2756]: NETFILTER_CFG table=nat:102 family=2 entries=12 op=nft_register_rule pid=2756 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:46.170000 audit[2756]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdfda58fd0 
a2=0 a3=0 items=0 ppid=2355 pid=2756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:46.170000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:47.071228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2854897022.mount: Deactivated successfully. Nov 1 00:43:47.391655 kubelet[2193]: E1101 00:43:47.391150 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fvm9x" podUID="1db94968-800e-4bd7-88c1-2551a090e4ab" Nov 1 00:43:48.328464 env[1301]: time="2025-11-01T00:43:48.328379076Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:48.331573 env[1301]: time="2025-11-01T00:43:48.331517019Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:48.335078 env[1301]: time="2025-11-01T00:43:48.335031878Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:48.337940 env[1301]: time="2025-11-01T00:43:48.337894693Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:48.339088 
env[1301]: time="2025-11-01T00:43:48.339042591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 1 00:43:48.342018 env[1301]: time="2025-11-01T00:43:48.341965272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 1 00:43:48.369193 env[1301]: time="2025-11-01T00:43:48.357368183Z" level=info msg="CreateContainer within sandbox \"74e138743dfe1c4daa5d183eed8f2158b78ef4090cc15d1fd7281a2976ef68a6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 1 00:43:48.388254 env[1301]: time="2025-11-01T00:43:48.388087689Z" level=info msg="CreateContainer within sandbox \"74e138743dfe1c4daa5d183eed8f2158b78ef4090cc15d1fd7281a2976ef68a6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e582386b885e0e9ccf6f7dbde161e1db1b5bedca7d1700b1a336eba2e6bc8b93\"" Nov 1 00:43:48.392085 env[1301]: time="2025-11-01T00:43:48.389452062Z" level=info msg="StartContainer for \"e582386b885e0e9ccf6f7dbde161e1db1b5bedca7d1700b1a336eba2e6bc8b93\"" Nov 1 00:43:48.523068 env[1301]: time="2025-11-01T00:43:48.522999492Z" level=info msg="StartContainer for \"e582386b885e0e9ccf6f7dbde161e1db1b5bedca7d1700b1a336eba2e6bc8b93\" returns successfully" Nov 1 00:43:49.394108 kubelet[2193]: E1101 00:43:49.394052 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fvm9x" podUID="1db94968-800e-4bd7-88c1-2551a090e4ab" Nov 1 00:43:49.472054 env[1301]: time="2025-11-01T00:43:49.471525915Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:49.476215 env[1301]: 
time="2025-11-01T00:43:49.476135126Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:49.478302 env[1301]: time="2025-11-01T00:43:49.478257949Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:49.480375 env[1301]: time="2025-11-01T00:43:49.480315976Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:49.481132 env[1301]: time="2025-11-01T00:43:49.481048510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 1 00:43:49.486488 env[1301]: time="2025-11-01T00:43:49.486443835Z" level=info msg="CreateContainer within sandbox \"e866939bc6b8fd56ef3e0ce1b7f23248df9102762d451478cca5e5a58b97ae53\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 1 00:43:49.513132 env[1301]: time="2025-11-01T00:43:49.513015127Z" level=info msg="CreateContainer within sandbox \"e866939bc6b8fd56ef3e0ce1b7f23248df9102762d451478cca5e5a58b97ae53\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c4cc8284c0e0e2c2b0c5cbb01ca437c7c38e242a816cc2a5ac753ed562af21b7\"" Nov 1 00:43:49.517207 env[1301]: time="2025-11-01T00:43:49.513995089Z" level=info msg="StartContainer for \"c4cc8284c0e0e2c2b0c5cbb01ca437c7c38e242a816cc2a5ac753ed562af21b7\"" Nov 1 00:43:49.557101 kubelet[2193]: I1101 00:43:49.556233 2193 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/calico-typha-854dd8c5dd-llvvc" podStartSLOduration=2.070083187 podStartE2EDuration="4.556201429s" podCreationTimestamp="2025-11-01 00:43:45 +0000 UTC" firstStartedPulling="2025-11-01 00:43:45.855616164 +0000 UTC m=+28.802717535" lastFinishedPulling="2025-11-01 00:43:48.341734405 +0000 UTC m=+31.288835777" observedRunningTime="2025-11-01 00:43:49.553825538 +0000 UTC m=+32.500926921" watchObservedRunningTime="2025-11-01 00:43:49.556201429 +0000 UTC m=+32.503302811" Nov 1 00:43:49.557483 kubelet[2193]: E1101 00:43:49.557449 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:49.557620 kubelet[2193]: W1101 00:43:49.557484 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:49.557620 kubelet[2193]: E1101 00:43:49.557536 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:49.562531 kubelet[2193]: E1101 00:43:49.558104 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:49.562531 kubelet[2193]: W1101 00:43:49.558131 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:49.562531 kubelet[2193]: E1101 00:43:49.558165 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:49.562531 kubelet[2193]: E1101 00:43:49.561426 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:49.562531 kubelet[2193]: W1101 00:43:49.561444 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:49.562531 kubelet[2193]: E1101 00:43:49.561469 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:49.562531 kubelet[2193]: E1101 00:43:49.561864 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:49.562531 kubelet[2193]: W1101 00:43:49.561878 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:49.562531 kubelet[2193]: E1101 00:43:49.561895 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:49.562531 kubelet[2193]: E1101 00:43:49.562256 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:49.563132 kubelet[2193]: W1101 00:43:49.562271 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:49.563132 kubelet[2193]: E1101 00:43:49.562291 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:49.564583 kubelet[2193]: E1101 00:43:49.563401 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:49.564583 kubelet[2193]: W1101 00:43:49.563421 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:49.564583 kubelet[2193]: E1101 00:43:49.563440 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:49.564583 kubelet[2193]: E1101 00:43:49.564400 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:49.564583 kubelet[2193]: W1101 00:43:49.564417 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:49.564583 kubelet[2193]: E1101 00:43:49.564436 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:49.565319 kubelet[2193]: E1101 00:43:49.565095 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:49.565319 kubelet[2193]: W1101 00:43:49.565115 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:49.565319 kubelet[2193]: E1101 00:43:49.565134 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:49.565811 kubelet[2193]: E1101 00:43:49.565793 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:49.565931 kubelet[2193]: W1101 00:43:49.565913 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:49.566064 kubelet[2193]: E1101 00:43:49.566044 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:49.566503 kubelet[2193]: E1101 00:43:49.566483 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:49.566658 kubelet[2193]: W1101 00:43:49.566638 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:49.566782 kubelet[2193]: E1101 00:43:49.566762 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:49.567304 kubelet[2193]: E1101 00:43:49.567284 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:49.567458 kubelet[2193]: W1101 00:43:49.567438 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:49.567570 kubelet[2193]: E1101 00:43:49.567551 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:49.568400 kubelet[2193]: E1101 00:43:49.568383 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:49.568619 kubelet[2193]: W1101 00:43:49.568573 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:49.568791 kubelet[2193]: E1101 00:43:49.568751 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:49.569377 kubelet[2193]: E1101 00:43:49.569351 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:49.569377 kubelet[2193]: W1101 00:43:49.569375 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:49.569519 kubelet[2193]: E1101 00:43:49.569395 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:49.569731 kubelet[2193]: E1101 00:43:49.569709 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:49.569731 kubelet[2193]: W1101 00:43:49.569729 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:49.569878 kubelet[2193]: E1101 00:43:49.569746 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:49.570075 kubelet[2193]: E1101 00:43:49.570054 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:49.570152 kubelet[2193]: W1101 00:43:49.570077 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:49.570152 kubelet[2193]: E1101 00:43:49.570094 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:49.587595 systemd[1]: run-containerd-runc-k8s.io-c4cc8284c0e0e2c2b0c5cbb01ca437c7c38e242a816cc2a5ac753ed562af21b7-runc.KwhBb2.mount: Deactivated successfully. Nov 1 00:43:49.596991 kubelet[2193]: E1101 00:43:49.596959 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:49.596991 kubelet[2193]: W1101 00:43:49.596989 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:49.597497 kubelet[2193]: E1101 00:43:49.597023 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:49.597774 kubelet[2193]: E1101 00:43:49.597538 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:49.597774 kubelet[2193]: W1101 00:43:49.597562 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:49.597774 kubelet[2193]: E1101 00:43:49.597596 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:49.598037 kubelet[2193]: E1101 00:43:49.597945 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:49.598037 kubelet[2193]: W1101 00:43:49.597961 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:49.598037 kubelet[2193]: E1101 00:43:49.597985 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:49.598378 kubelet[2193]: E1101 00:43:49.598341 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:49.598378 kubelet[2193]: W1101 00:43:49.598362 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:49.598537 kubelet[2193]: E1101 00:43:49.598387 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:49.598803 kubelet[2193]: E1101 00:43:49.598782 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:49.598803 kubelet[2193]: W1101 00:43:49.598801 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:49.598969 kubelet[2193]: E1101 00:43:49.598925 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:49.599493 kubelet[2193]: E1101 00:43:49.599467 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:49.599493 kubelet[2193]: W1101 00:43:49.599484 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:49.599682 kubelet[2193]: E1101 00:43:49.599509 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:49.599943 kubelet[2193]: E1101 00:43:49.599897 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:49.599943 kubelet[2193]: W1101 00:43:49.599917 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:49.599943 kubelet[2193]: E1101 00:43:49.599941 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:49.600573 kubelet[2193]: E1101 00:43:49.600526 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:49.600573 kubelet[2193]: W1101 00:43:49.600555 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:49.601033 kubelet[2193]: E1101 00:43:49.600581 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:49.602862 kubelet[2193]: E1101 00:43:49.602816 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:49.602862 kubelet[2193]: W1101 00:43:49.602838 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:49.603037 kubelet[2193]: E1101 00:43:49.602975 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:49.603389 kubelet[2193]: E1101 00:43:49.603350 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:49.603389 kubelet[2193]: W1101 00:43:49.603382 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:49.603696 kubelet[2193]: E1101 00:43:49.603536 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:49.603780 kubelet[2193]: E1101 00:43:49.603764 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:49.603780 kubelet[2193]: W1101 00:43:49.603777 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:49.604056 kubelet[2193]: E1101 00:43:49.603901 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:49.604138 kubelet[2193]: E1101 00:43:49.604100 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:49.604138 kubelet[2193]: W1101 00:43:49.604135 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:49.604280 kubelet[2193]: E1101 00:43:49.604156 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:49.604579 kubelet[2193]: E1101 00:43:49.604539 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:49.604579 kubelet[2193]: W1101 00:43:49.604557 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:49.604750 kubelet[2193]: E1101 00:43:49.604581 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:49.606100 kubelet[2193]: E1101 00:43:49.606038 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:49.606100 kubelet[2193]: W1101 00:43:49.606059 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:49.606279 kubelet[2193]: E1101 00:43:49.606250 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:49.606556 kubelet[2193]: E1101 00:43:49.606518 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:49.606556 kubelet[2193]: W1101 00:43:49.606536 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:49.606708 kubelet[2193]: E1101 00:43:49.606565 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:49.607004 kubelet[2193]: E1101 00:43:49.606968 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:49.607004 kubelet[2193]: W1101 00:43:49.606989 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:49.607416 kubelet[2193]: E1101 00:43:49.607008 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:49.607483 kubelet[2193]: E1101 00:43:49.607420 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:49.607483 kubelet[2193]: W1101 00:43:49.607446 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:49.607483 kubelet[2193]: E1101 00:43:49.607465 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:49.608203 kubelet[2193]: E1101 00:43:49.608126 2193 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:49.608203 kubelet[2193]: W1101 00:43:49.608146 2193 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:49.608203 kubelet[2193]: E1101 00:43:49.608163 2193 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:49.667114 env[1301]: time="2025-11-01T00:43:49.661115589Z" level=info msg="StartContainer for \"c4cc8284c0e0e2c2b0c5cbb01ca437c7c38e242a816cc2a5ac753ed562af21b7\" returns successfully" Nov 1 00:43:50.358023 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4cc8284c0e0e2c2b0c5cbb01ca437c7c38e242a816cc2a5ac753ed562af21b7-rootfs.mount: Deactivated successfully. 
Nov 1 00:43:50.431064 env[1301]: time="2025-11-01T00:43:50.430827454Z" level=info msg="shim disconnected" id=c4cc8284c0e0e2c2b0c5cbb01ca437c7c38e242a816cc2a5ac753ed562af21b7 Nov 1 00:43:50.432223 env[1301]: time="2025-11-01T00:43:50.432124276Z" level=warning msg="cleaning up after shim disconnected" id=c4cc8284c0e0e2c2b0c5cbb01ca437c7c38e242a816cc2a5ac753ed562af21b7 namespace=k8s.io Nov 1 00:43:50.432887 env[1301]: time="2025-11-01T00:43:50.432330649Z" level=info msg="cleaning up dead shim" Nov 1 00:43:50.454369 env[1301]: time="2025-11-01T00:43:50.454289306Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:43:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2882 runtime=io.containerd.runc.v2\n" Nov 1 00:43:50.533763 kubelet[2193]: I1101 00:43:50.533720 2193 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:43:50.537527 env[1301]: time="2025-11-01T00:43:50.537476074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 1 00:43:51.392981 kubelet[2193]: E1101 00:43:51.392915 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fvm9x" podUID="1db94968-800e-4bd7-88c1-2551a090e4ab" Nov 1 00:43:53.391713 kubelet[2193]: E1101 00:43:53.391644 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fvm9x" podUID="1db94968-800e-4bd7-88c1-2551a090e4ab" Nov 1 00:43:54.641755 env[1301]: time="2025-11-01T00:43:54.641613923Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Nov 1 00:43:54.644829 env[1301]: time="2025-11-01T00:43:54.644737004Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:54.647265 env[1301]: time="2025-11-01T00:43:54.647157578Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:54.649502 env[1301]: time="2025-11-01T00:43:54.649449777Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:54.650363 env[1301]: time="2025-11-01T00:43:54.650297206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 1 00:43:54.659722 env[1301]: time="2025-11-01T00:43:54.659660019Z" level=info msg="CreateContainer within sandbox \"e866939bc6b8fd56ef3e0ce1b7f23248df9102762d451478cca5e5a58b97ae53\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 1 00:43:54.683060 env[1301]: time="2025-11-01T00:43:54.682953812Z" level=info msg="CreateContainer within sandbox \"e866939bc6b8fd56ef3e0ce1b7f23248df9102762d451478cca5e5a58b97ae53\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b8081c0c7e1c2b330bcecc9c8fd9630d66b0611be92dcb5c4282ff539dcd6d81\"" Nov 1 00:43:54.686717 env[1301]: time="2025-11-01T00:43:54.684356879Z" level=info msg="StartContainer for \"b8081c0c7e1c2b330bcecc9c8fd9630d66b0611be92dcb5c4282ff539dcd6d81\"" Nov 1 00:43:54.735320 systemd[1]: run-containerd-runc-k8s.io-b8081c0c7e1c2b330bcecc9c8fd9630d66b0611be92dcb5c4282ff539dcd6d81-runc.Zpa3Lz.mount: 
Deactivated successfully. Nov 1 00:43:54.810566 env[1301]: time="2025-11-01T00:43:54.810451218Z" level=info msg="StartContainer for \"b8081c0c7e1c2b330bcecc9c8fd9630d66b0611be92dcb5c4282ff539dcd6d81\" returns successfully" Nov 1 00:43:55.392716 kubelet[2193]: E1101 00:43:55.392646 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fvm9x" podUID="1db94968-800e-4bd7-88c1-2551a090e4ab" Nov 1 00:43:55.909017 env[1301]: time="2025-11-01T00:43:55.908863216Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:43:55.932094 kubelet[2193]: I1101 00:43:55.932044 2193 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 1 00:43:55.953239 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8081c0c7e1c2b330bcecc9c8fd9630d66b0611be92dcb5c4282ff539dcd6d81-rootfs.mount: Deactivated successfully. 
Nov 1 00:43:56.009011 kubelet[2193]: W1101 00:43:56.008919 2193 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762' and this object Nov 1 00:43:56.009379 kubelet[2193]: E1101 00:43:56.009026 2193 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762' and this object" logger="UnhandledError" Nov 1 00:43:56.009379 kubelet[2193]: I1101 00:43:56.009225 2193 status_manager.go:890] "Failed to get status for pod" podUID="74931096-7bc0-4134-a8b4-61ec9bf5e338" pod="calico-apiserver/calico-apiserver-699db95d94-lmk4w" err="pods \"calico-apiserver-699db95d94-lmk4w\" is forbidden: User \"system:node:ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762' and this object" Nov 1 00:43:56.009630 kubelet[2193]: W1101 00:43:56.009523 2193 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762' and this 
object Nov 1 00:43:56.009630 kubelet[2193]: E1101 00:43:56.009560 2193 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762' and this object" logger="UnhandledError" Nov 1 00:43:56.013720 kubelet[2193]: W1101 00:43:56.013644 2193 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762' and this object Nov 1 00:43:56.013720 kubelet[2193]: E1101 00:43:56.013717 2193 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762' and this object" logger="UnhandledError" Nov 1 00:43:56.056237 kubelet[2193]: I1101 00:43:56.056106 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/74931096-7bc0-4134-a8b4-61ec9bf5e338-calico-apiserver-certs\") pod \"calico-apiserver-699db95d94-lmk4w\" (UID: \"74931096-7bc0-4134-a8b4-61ec9bf5e338\") " 
pod="calico-apiserver/calico-apiserver-699db95d94-lmk4w" Nov 1 00:43:56.056237 kubelet[2193]: I1101 00:43:56.056249 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2c78f36-7235-48aa-baae-2bd9c8a78b81-config-volume\") pod \"coredns-668d6bf9bc-7xkbs\" (UID: \"b2c78f36-7235-48aa-baae-2bd9c8a78b81\") " pod="kube-system/coredns-668d6bf9bc-7xkbs" Nov 1 00:43:56.056623 kubelet[2193]: I1101 00:43:56.056283 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb5676df-eb26-4a3d-9a39-dc277ac29b28-goldmane-ca-bundle\") pod \"goldmane-666569f655-xt7wl\" (UID: \"bb5676df-eb26-4a3d-9a39-dc277ac29b28\") " pod="calico-system/goldmane-666569f655-xt7wl" Nov 1 00:43:56.056623 kubelet[2193]: I1101 00:43:56.056318 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb5676df-eb26-4a3d-9a39-dc277ac29b28-config\") pod \"goldmane-666569f655-xt7wl\" (UID: \"bb5676df-eb26-4a3d-9a39-dc277ac29b28\") " pod="calico-system/goldmane-666569f655-xt7wl" Nov 1 00:43:56.056623 kubelet[2193]: I1101 00:43:56.056347 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kts9\" (UniqueName: \"kubernetes.io/projected/b2c78f36-7235-48aa-baae-2bd9c8a78b81-kube-api-access-2kts9\") pod \"coredns-668d6bf9bc-7xkbs\" (UID: \"b2c78f36-7235-48aa-baae-2bd9c8a78b81\") " pod="kube-system/coredns-668d6bf9bc-7xkbs" Nov 1 00:43:56.056623 kubelet[2193]: I1101 00:43:56.056376 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34b32444-f031-47a6-89b0-97775432ade7-tigera-ca-bundle\") pod \"calico-kube-controllers-f99bc94f9-dqr4b\" (UID: 
\"34b32444-f031-47a6-89b0-97775432ade7\") " pod="calico-system/calico-kube-controllers-f99bc94f9-dqr4b" Nov 1 00:43:56.056623 kubelet[2193]: I1101 00:43:56.056407 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4e6e4e0-2f67-459b-8ed4-30190e515a8d-config-volume\") pod \"coredns-668d6bf9bc-4bhnp\" (UID: \"a4e6e4e0-2f67-459b-8ed4-30190e515a8d\") " pod="kube-system/coredns-668d6bf9bc-4bhnp" Nov 1 00:43:56.057167 kubelet[2193]: I1101 00:43:56.056438 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn8kh\" (UniqueName: \"kubernetes.io/projected/a4e6e4e0-2f67-459b-8ed4-30190e515a8d-kube-api-access-tn8kh\") pod \"coredns-668d6bf9bc-4bhnp\" (UID: \"a4e6e4e0-2f67-459b-8ed4-30190e515a8d\") " pod="kube-system/coredns-668d6bf9bc-4bhnp" Nov 1 00:43:56.057167 kubelet[2193]: I1101 00:43:56.056486 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm7px\" (UniqueName: \"kubernetes.io/projected/bb5676df-eb26-4a3d-9a39-dc277ac29b28-kube-api-access-qm7px\") pod \"goldmane-666569f655-xt7wl\" (UID: \"bb5676df-eb26-4a3d-9a39-dc277ac29b28\") " pod="calico-system/goldmane-666569f655-xt7wl" Nov 1 00:43:56.057167 kubelet[2193]: I1101 00:43:56.056515 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phntk\" (UniqueName: \"kubernetes.io/projected/34b32444-f031-47a6-89b0-97775432ade7-kube-api-access-phntk\") pod \"calico-kube-controllers-f99bc94f9-dqr4b\" (UID: \"34b32444-f031-47a6-89b0-97775432ade7\") " pod="calico-system/calico-kube-controllers-f99bc94f9-dqr4b" Nov 1 00:43:56.057167 kubelet[2193]: I1101 00:43:56.056548 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: 
\"kubernetes.io/secret/bb5676df-eb26-4a3d-9a39-dc277ac29b28-goldmane-key-pair\") pod \"goldmane-666569f655-xt7wl\" (UID: \"bb5676df-eb26-4a3d-9a39-dc277ac29b28\") " pod="calico-system/goldmane-666569f655-xt7wl" Nov 1 00:43:56.057167 kubelet[2193]: I1101 00:43:56.056605 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb57m\" (UniqueName: \"kubernetes.io/projected/74931096-7bc0-4134-a8b4-61ec9bf5e338-kube-api-access-lb57m\") pod \"calico-apiserver-699db95d94-lmk4w\" (UID: \"74931096-7bc0-4134-a8b4-61ec9bf5e338\") " pod="calico-apiserver/calico-apiserver-699db95d94-lmk4w" Nov 1 00:43:56.157804 kubelet[2193]: I1101 00:43:56.157731 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/672f6371-8fa5-4e93-b64e-cf94de884875-whisker-ca-bundle\") pod \"whisker-5869c7ff56-5ffpp\" (UID: \"672f6371-8fa5-4e93-b64e-cf94de884875\") " pod="calico-system/whisker-5869c7ff56-5ffpp" Nov 1 00:43:56.158101 kubelet[2193]: I1101 00:43:56.157887 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4e92eb00-99ac-4f51-a076-ab8bc59ed374-calico-apiserver-certs\") pod \"calico-apiserver-699db95d94-pf9ws\" (UID: \"4e92eb00-99ac-4f51-a076-ab8bc59ed374\") " pod="calico-apiserver/calico-apiserver-699db95d94-pf9ws" Nov 1 00:43:56.158101 kubelet[2193]: I1101 00:43:56.157922 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/672f6371-8fa5-4e93-b64e-cf94de884875-whisker-backend-key-pair\") pod \"whisker-5869c7ff56-5ffpp\" (UID: \"672f6371-8fa5-4e93-b64e-cf94de884875\") " pod="calico-system/whisker-5869c7ff56-5ffpp" Nov 1 00:43:56.158101 kubelet[2193]: I1101 00:43:56.158050 2193 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fbpk\" (UniqueName: \"kubernetes.io/projected/4e92eb00-99ac-4f51-a076-ab8bc59ed374-kube-api-access-4fbpk\") pod \"calico-apiserver-699db95d94-pf9ws\" (UID: \"4e92eb00-99ac-4f51-a076-ab8bc59ed374\") " pod="calico-apiserver/calico-apiserver-699db95d94-pf9ws" Nov 1 00:43:56.158101 kubelet[2193]: I1101 00:43:56.158081 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgxqf\" (UniqueName: \"kubernetes.io/projected/672f6371-8fa5-4e93-b64e-cf94de884875-kube-api-access-bgxqf\") pod \"whisker-5869c7ff56-5ffpp\" (UID: \"672f6371-8fa5-4e93-b64e-cf94de884875\") " pod="calico-system/whisker-5869c7ff56-5ffpp" Nov 1 00:43:56.321013 env[1301]: time="2025-11-01T00:43:56.320827169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f99bc94f9-dqr4b,Uid:34b32444-f031-47a6-89b0-97775432ade7,Namespace:calico-system,Attempt:0,}" Nov 1 00:43:56.335057 env[1301]: time="2025-11-01T00:43:56.334853981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-xt7wl,Uid:bb5676df-eb26-4a3d-9a39-dc277ac29b28,Namespace:calico-system,Attempt:0,}" Nov 1 00:43:56.351958 env[1301]: time="2025-11-01T00:43:56.351867715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5869c7ff56-5ffpp,Uid:672f6371-8fa5-4e93-b64e-cf94de884875,Namespace:calico-system,Attempt:0,}" Nov 1 00:43:56.505925 env[1301]: time="2025-11-01T00:43:56.505847520Z" level=info msg="shim disconnected" id=b8081c0c7e1c2b330bcecc9c8fd9630d66b0611be92dcb5c4282ff539dcd6d81 Nov 1 00:43:56.505925 env[1301]: time="2025-11-01T00:43:56.505927339Z" level=warning msg="cleaning up after shim disconnected" id=b8081c0c7e1c2b330bcecc9c8fd9630d66b0611be92dcb5c4282ff539dcd6d81 namespace=k8s.io Nov 1 00:43:56.506398 env[1301]: time="2025-11-01T00:43:56.505950901Z" level=info msg="cleaning up dead shim" Nov 1 00:43:56.523651 
env[1301]: time="2025-11-01T00:43:56.523582236Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:43:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2960 runtime=io.containerd.runc.v2\n" Nov 1 00:43:56.572227 env[1301]: time="2025-11-01T00:43:56.569142161Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 1 00:43:56.720572 env[1301]: time="2025-11-01T00:43:56.720453369Z" level=error msg="Failed to destroy network for sandbox \"534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:56.721154 env[1301]: time="2025-11-01T00:43:56.721095118Z" level=error msg="encountered an error cleaning up failed sandbox \"534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:56.721319 env[1301]: time="2025-11-01T00:43:56.721210661Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5869c7ff56-5ffpp,Uid:672f6371-8fa5-4e93-b64e-cf94de884875,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:56.723514 kubelet[2193]: E1101 00:43:56.721645 2193 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:56.723514 kubelet[2193]: E1101 00:43:56.721764 2193 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5869c7ff56-5ffpp" Nov 1 00:43:56.723514 kubelet[2193]: E1101 00:43:56.721802 2193 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5869c7ff56-5ffpp" Nov 1 00:43:56.725584 kubelet[2193]: E1101 00:43:56.721878 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5869c7ff56-5ffpp_calico-system(672f6371-8fa5-4e93-b64e-cf94de884875)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5869c7ff56-5ffpp_calico-system(672f6371-8fa5-4e93-b64e-cf94de884875)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5869c7ff56-5ffpp" podUID="672f6371-8fa5-4e93-b64e-cf94de884875" Nov 1 00:43:56.759383 env[1301]: 
time="2025-11-01T00:43:56.759286955Z" level=error msg="Failed to destroy network for sandbox \"e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:56.759970 env[1301]: time="2025-11-01T00:43:56.759902716Z" level=error msg="encountered an error cleaning up failed sandbox \"e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:56.760133 env[1301]: time="2025-11-01T00:43:56.760008264Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f99bc94f9-dqr4b,Uid:34b32444-f031-47a6-89b0-97775432ade7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:56.760553 kubelet[2193]: E1101 00:43:56.760478 2193 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:56.760701 kubelet[2193]: E1101 00:43:56.760585 2193 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f99bc94f9-dqr4b" Nov 1 00:43:56.760701 kubelet[2193]: E1101 00:43:56.760624 2193 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f99bc94f9-dqr4b" Nov 1 00:43:56.760824 kubelet[2193]: E1101 00:43:56.760702 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f99bc94f9-dqr4b_calico-system(34b32444-f031-47a6-89b0-97775432ade7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f99bc94f9-dqr4b_calico-system(34b32444-f031-47a6-89b0-97775432ade7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f99bc94f9-dqr4b" podUID="34b32444-f031-47a6-89b0-97775432ade7" Nov 1 00:43:56.765360 env[1301]: time="2025-11-01T00:43:56.765272721Z" level=error msg="Failed to destroy network for sandbox \"6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Nov 1 00:43:56.765875 env[1301]: time="2025-11-01T00:43:56.765814987Z" level=error msg="encountered an error cleaning up failed sandbox \"6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:56.766009 env[1301]: time="2025-11-01T00:43:56.765925733Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-xt7wl,Uid:bb5676df-eb26-4a3d-9a39-dc277ac29b28,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:56.766305 kubelet[2193]: E1101 00:43:56.766245 2193 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:56.766422 kubelet[2193]: E1101 00:43:56.766324 2193 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-xt7wl" Nov 1 00:43:56.766422 kubelet[2193]: E1101 
00:43:56.766359 2193 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-xt7wl" Nov 1 00:43:56.766559 kubelet[2193]: E1101 00:43:56.766432 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-xt7wl_calico-system(bb5676df-eb26-4a3d-9a39-dc277ac29b28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-xt7wl_calico-system(bb5676df-eb26-4a3d-9a39-dc277ac29b28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-xt7wl" podUID="bb5676df-eb26-4a3d-9a39-dc277ac29b28" Nov 1 00:43:57.159410 kubelet[2193]: E1101 00:43:57.159359 2193 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Nov 1 00:43:57.159671 kubelet[2193]: E1101 00:43:57.159493 2193 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b2c78f36-7235-48aa-baae-2bd9c8a78b81-config-volume podName:b2c78f36-7235-48aa-baae-2bd9c8a78b81 nodeName:}" failed. No retries permitted until 2025-11-01 00:43:57.659459692 +0000 UTC m=+40.606561060 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b2c78f36-7235-48aa-baae-2bd9c8a78b81-config-volume") pod "coredns-668d6bf9bc-7xkbs" (UID: "b2c78f36-7235-48aa-baae-2bd9c8a78b81") : failed to sync configmap cache: timed out waiting for the condition Nov 1 00:43:57.161714 kubelet[2193]: E1101 00:43:57.160460 2193 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Nov 1 00:43:57.161714 kubelet[2193]: E1101 00:43:57.160552 2193 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a4e6e4e0-2f67-459b-8ed4-30190e515a8d-config-volume podName:a4e6e4e0-2f67-459b-8ed4-30190e515a8d nodeName:}" failed. No retries permitted until 2025-11-01 00:43:57.660529383 +0000 UTC m=+40.607630742 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a4e6e4e0-2f67-459b-8ed4-30190e515a8d-config-volume") pod "coredns-668d6bf9bc-4bhnp" (UID: "a4e6e4e0-2f67-459b-8ed4-30190e515a8d") : failed to sync configmap cache: timed out waiting for the condition Nov 1 00:43:57.200577 kubelet[2193]: E1101 00:43:57.200499 2193 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Nov 1 00:43:57.200577 kubelet[2193]: E1101 00:43:57.200592 2193 projected.go:194] Error preparing data for projected volume kube-api-access-lb57m for pod calico-apiserver/calico-apiserver-699db95d94-lmk4w: failed to sync configmap cache: timed out waiting for the condition Nov 1 00:43:57.200992 kubelet[2193]: E1101 00:43:57.200704 2193 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74931096-7bc0-4134-a8b4-61ec9bf5e338-kube-api-access-lb57m podName:74931096-7bc0-4134-a8b4-61ec9bf5e338 nodeName:}" failed. 
No retries permitted until 2025-11-01 00:43:57.700672098 +0000 UTC m=+40.647773453 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lb57m" (UniqueName: "kubernetes.io/projected/74931096-7bc0-4134-a8b4-61ec9bf5e338-kube-api-access-lb57m") pod "calico-apiserver-699db95d94-lmk4w" (UID: "74931096-7bc0-4134-a8b4-61ec9bf5e338") : failed to sync configmap cache: timed out waiting for the condition Nov 1 00:43:57.271661 kubelet[2193]: E1101 00:43:57.271573 2193 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Nov 1 00:43:57.271661 kubelet[2193]: E1101 00:43:57.271663 2193 projected.go:194] Error preparing data for projected volume kube-api-access-4fbpk for pod calico-apiserver/calico-apiserver-699db95d94-pf9ws: failed to sync configmap cache: timed out waiting for the condition Nov 1 00:43:57.272058 kubelet[2193]: E1101 00:43:57.271769 2193 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4e92eb00-99ac-4f51-a076-ab8bc59ed374-kube-api-access-4fbpk podName:4e92eb00-99ac-4f51-a076-ab8bc59ed374 nodeName:}" failed. No retries permitted until 2025-11-01 00:43:57.771738107 +0000 UTC m=+40.718839482 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4fbpk" (UniqueName: "kubernetes.io/projected/4e92eb00-99ac-4f51-a076-ab8bc59ed374-kube-api-access-4fbpk") pod "calico-apiserver-699db95d94-pf9ws" (UID: "4e92eb00-99ac-4f51-a076-ab8bc59ed374") : failed to sync configmap cache: timed out waiting for the condition Nov 1 00:43:57.396811 env[1301]: time="2025-11-01T00:43:57.396653094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fvm9x,Uid:1db94968-800e-4bd7-88c1-2551a090e4ab,Namespace:calico-system,Attempt:0,}" Nov 1 00:43:57.524426 env[1301]: time="2025-11-01T00:43:57.523381546Z" level=error msg="Failed to destroy network for sandbox \"8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:57.529689 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c-shm.mount: Deactivated successfully. 
Nov 1 00:43:57.534431 env[1301]: time="2025-11-01T00:43:57.534339461Z" level=error msg="encountered an error cleaning up failed sandbox \"8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:57.534600 env[1301]: time="2025-11-01T00:43:57.534442298Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fvm9x,Uid:1db94968-800e-4bd7-88c1-2551a090e4ab,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:57.534715 kubelet[2193]: E1101 00:43:57.534649 2193 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:57.534836 kubelet[2193]: E1101 00:43:57.534746 2193 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fvm9x" Nov 1 00:43:57.534836 kubelet[2193]: E1101 00:43:57.534784 2193 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fvm9x" Nov 1 00:43:57.534964 kubelet[2193]: E1101 00:43:57.534857 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fvm9x_calico-system(1db94968-800e-4bd7-88c1-2551a090e4ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fvm9x_calico-system(1db94968-800e-4bd7-88c1-2551a090e4ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fvm9x" podUID="1db94968-800e-4bd7-88c1-2551a090e4ab" Nov 1 00:43:57.580211 kubelet[2193]: I1101 00:43:57.580139 2193 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" Nov 1 00:43:57.583565 env[1301]: time="2025-11-01T00:43:57.582981824Z" level=info msg="StopPodSandbox for \"6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3\"" Nov 1 00:43:57.602796 kubelet[2193]: I1101 00:43:57.602740 2193 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" Nov 1 00:43:57.604889 env[1301]: time="2025-11-01T00:43:57.604835109Z" level=info msg="StopPodSandbox for \"8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c\"" Nov 1 00:43:57.606791 
kubelet[2193]: I1101 00:43:57.606204 2193 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" Nov 1 00:43:57.607168 env[1301]: time="2025-11-01T00:43:57.607121780Z" level=info msg="StopPodSandbox for \"e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3\"" Nov 1 00:43:57.630152 kubelet[2193]: I1101 00:43:57.630110 2193 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" Nov 1 00:43:57.656360 env[1301]: time="2025-11-01T00:43:57.656294589Z" level=info msg="StopPodSandbox for \"534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6\"" Nov 1 00:43:57.751730 env[1301]: time="2025-11-01T00:43:57.751615789Z" level=error msg="StopPodSandbox for \"6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3\" failed" error="failed to destroy network for sandbox \"6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:57.752021 kubelet[2193]: E1101 00:43:57.751958 2193 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" Nov 1 00:43:57.752662 kubelet[2193]: E1101 00:43:57.752070 2193 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3"} Nov 1 
00:43:57.752662 kubelet[2193]: E1101 00:43:57.752202 2193 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bb5676df-eb26-4a3d-9a39-dc277ac29b28\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:43:57.752662 kubelet[2193]: E1101 00:43:57.752245 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bb5676df-eb26-4a3d-9a39-dc277ac29b28\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-xt7wl" podUID="bb5676df-eb26-4a3d-9a39-dc277ac29b28" Nov 1 00:43:57.793195 env[1301]: time="2025-11-01T00:43:57.791734183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7xkbs,Uid:b2c78f36-7235-48aa-baae-2bd9c8a78b81,Namespace:kube-system,Attempt:0,}" Nov 1 00:43:57.825143 env[1301]: time="2025-11-01T00:43:57.825072885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699db95d94-lmk4w,Uid:74931096-7bc0-4134-a8b4-61ec9bf5e338,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:43:57.875168 env[1301]: time="2025-11-01T00:43:57.875096341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699db95d94-pf9ws,Uid:4e92eb00-99ac-4f51-a076-ab8bc59ed374,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:43:57.886768 env[1301]: time="2025-11-01T00:43:57.886676722Z" 
level=error msg="StopPodSandbox for \"8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c\" failed" error="failed to destroy network for sandbox \"8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:57.913202 env[1301]: time="2025-11-01T00:43:57.913110456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4bhnp,Uid:a4e6e4e0-2f67-459b-8ed4-30190e515a8d,Namespace:kube-system,Attempt:0,}" Nov 1 00:43:57.928209 kubelet[2193]: E1101 00:43:57.927839 2193 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" Nov 1 00:43:57.928209 kubelet[2193]: E1101 00:43:57.927970 2193 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c"} Nov 1 00:43:57.928209 kubelet[2193]: E1101 00:43:57.928046 2193 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1db94968-800e-4bd7-88c1-2551a090e4ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:43:57.928209 kubelet[2193]: E1101 
00:43:57.928118 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1db94968-800e-4bd7-88c1-2551a090e4ab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fvm9x" podUID="1db94968-800e-4bd7-88c1-2551a090e4ab" Nov 1 00:43:57.985454 env[1301]: time="2025-11-01T00:43:57.985271772Z" level=error msg="StopPodSandbox for \"e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3\" failed" error="failed to destroy network for sandbox \"e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:58.013294 kubelet[2193]: E1101 00:43:58.013212 2193 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" Nov 1 00:43:58.013580 kubelet[2193]: E1101 00:43:58.013311 2193 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3"} Nov 1 00:43:58.013580 kubelet[2193]: E1101 00:43:58.013374 2193 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"34b32444-f031-47a6-89b0-97775432ade7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:43:58.013580 kubelet[2193]: E1101 00:43:58.013411 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"34b32444-f031-47a6-89b0-97775432ade7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f99bc94f9-dqr4b" podUID="34b32444-f031-47a6-89b0-97775432ade7" Nov 1 00:43:58.032609 env[1301]: time="2025-11-01T00:43:58.032517182Z" level=error msg="StopPodSandbox for \"534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6\" failed" error="failed to destroy network for sandbox \"534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:58.036416 kubelet[2193]: E1101 00:43:58.035961 2193 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" Nov 1 00:43:58.036416 kubelet[2193]: E1101 00:43:58.036135 2193 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6"} Nov 1 00:43:58.036416 kubelet[2193]: E1101 00:43:58.036238 2193 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"672f6371-8fa5-4e93-b64e-cf94de884875\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:43:58.036416 kubelet[2193]: E1101 00:43:58.036306 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"672f6371-8fa5-4e93-b64e-cf94de884875\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5869c7ff56-5ffpp" podUID="672f6371-8fa5-4e93-b64e-cf94de884875" Nov 1 00:43:58.251741 env[1301]: time="2025-11-01T00:43:58.251623072Z" level=error msg="Failed to destroy network for sandbox \"741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:58.257068 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112-shm.mount: Deactivated successfully. Nov 1 00:43:58.260400 env[1301]: time="2025-11-01T00:43:58.260308207Z" level=error msg="encountered an error cleaning up failed sandbox \"741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:58.260593 env[1301]: time="2025-11-01T00:43:58.260440827Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699db95d94-lmk4w,Uid:74931096-7bc0-4134-a8b4-61ec9bf5e338,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:58.260809 kubelet[2193]: E1101 00:43:58.260750 2193 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:58.260934 kubelet[2193]: E1101 00:43:58.260852 2193 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-699db95d94-lmk4w" Nov 1 00:43:58.260934 kubelet[2193]: E1101 00:43:58.260908 2193 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-699db95d94-lmk4w" Nov 1 00:43:58.261278 kubelet[2193]: E1101 00:43:58.260982 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-699db95d94-lmk4w_calico-apiserver(74931096-7bc0-4134-a8b4-61ec9bf5e338)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-699db95d94-lmk4w_calico-apiserver(74931096-7bc0-4134-a8b4-61ec9bf5e338)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-699db95d94-lmk4w" podUID="74931096-7bc0-4134-a8b4-61ec9bf5e338" Nov 1 00:43:58.289758 env[1301]: time="2025-11-01T00:43:58.289537170Z" level=error msg="Failed to destroy network for sandbox \"22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:58.295713 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a-shm.mount: Deactivated successfully. 
Nov 1 00:43:58.297697 env[1301]: time="2025-11-01T00:43:58.297613081Z" level=error msg="encountered an error cleaning up failed sandbox \"22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:58.297980 env[1301]: time="2025-11-01T00:43:58.297932945Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7xkbs,Uid:b2c78f36-7235-48aa-baae-2bd9c8a78b81,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:58.299805 kubelet[2193]: E1101 00:43:58.299701 2193 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:58.299951 kubelet[2193]: E1101 00:43:58.299839 2193 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7xkbs" Nov 1 00:43:58.299951 kubelet[2193]: E1101 00:43:58.299872 2193 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-7xkbs" Nov 1 00:43:58.300085 kubelet[2193]: E1101 00:43:58.299937 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-7xkbs_kube-system(b2c78f36-7235-48aa-baae-2bd9c8a78b81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-7xkbs_kube-system(b2c78f36-7235-48aa-baae-2bd9c8a78b81)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7xkbs" podUID="b2c78f36-7235-48aa-baae-2bd9c8a78b81" Nov 1 00:43:58.343945 env[1301]: time="2025-11-01T00:43:58.343836782Z" level=error msg="Failed to destroy network for sandbox \"7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:58.344773 env[1301]: time="2025-11-01T00:43:58.344694498Z" level=error msg="encountered an error cleaning up failed sandbox \"7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 1 00:43:58.344949 env[1301]: time="2025-11-01T00:43:58.344820033Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699db95d94-pf9ws,Uid:4e92eb00-99ac-4f51-a076-ab8bc59ed374,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:58.345297 kubelet[2193]: E1101 00:43:58.345222 2193 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:58.345449 kubelet[2193]: E1101 00:43:58.345327 2193 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-699db95d94-pf9ws" Nov 1 00:43:58.345449 kubelet[2193]: E1101 00:43:58.345369 2193 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-699db95d94-pf9ws" Nov 1 00:43:58.345582 kubelet[2193]: E1101 00:43:58.345442 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-699db95d94-pf9ws_calico-apiserver(4e92eb00-99ac-4f51-a076-ab8bc59ed374)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-699db95d94-pf9ws_calico-apiserver(4e92eb00-99ac-4f51-a076-ab8bc59ed374)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-699db95d94-pf9ws" podUID="4e92eb00-99ac-4f51-a076-ab8bc59ed374" Nov 1 00:43:58.372910 env[1301]: time="2025-11-01T00:43:58.372793971Z" level=error msg="Failed to destroy network for sandbox \"6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:58.373480 env[1301]: time="2025-11-01T00:43:58.373408386Z" level=error msg="encountered an error cleaning up failed sandbox \"6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:58.373639 env[1301]: time="2025-11-01T00:43:58.373503106Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4bhnp,Uid:a4e6e4e0-2f67-459b-8ed4-30190e515a8d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for 
sandbox \"6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:58.373934 kubelet[2193]: E1101 00:43:58.373878 2193 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:58.374077 kubelet[2193]: E1101 00:43:58.373974 2193 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4bhnp" Nov 1 00:43:58.374077 kubelet[2193]: E1101 00:43:58.374021 2193 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4bhnp" Nov 1 00:43:58.374245 kubelet[2193]: E1101 00:43:58.374101 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4bhnp_kube-system(a4e6e4e0-2f67-459b-8ed4-30190e515a8d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-668d6bf9bc-4bhnp_kube-system(a4e6e4e0-2f67-459b-8ed4-30190e515a8d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4bhnp" podUID="a4e6e4e0-2f67-459b-8ed4-30190e515a8d" Nov 1 00:43:58.647721 kubelet[2193]: I1101 00:43:58.647661 2193 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" Nov 1 00:43:58.649585 env[1301]: time="2025-11-01T00:43:58.649489904Z" level=info msg="StopPodSandbox for \"6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773\"" Nov 1 00:43:58.665899 kubelet[2193]: I1101 00:43:58.665826 2193 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" Nov 1 00:43:58.677634 env[1301]: time="2025-11-01T00:43:58.677524460Z" level=info msg="StopPodSandbox for \"22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a\"" Nov 1 00:43:58.715892 kubelet[2193]: I1101 00:43:58.715818 2193 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" Nov 1 00:43:58.721880 env[1301]: time="2025-11-01T00:43:58.721808608Z" level=info msg="StopPodSandbox for \"7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052\"" Nov 1 00:43:58.759728 kubelet[2193]: I1101 00:43:58.759641 2193 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" Nov 1 00:43:58.764548 env[1301]: time="2025-11-01T00:43:58.764471273Z" level=info msg="StopPodSandbox for 
\"741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112\"" Nov 1 00:43:58.834736 env[1301]: time="2025-11-01T00:43:58.834643380Z" level=error msg="StopPodSandbox for \"6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773\" failed" error="failed to destroy network for sandbox \"6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:58.835502 kubelet[2193]: E1101 00:43:58.835378 2193 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" Nov 1 00:43:58.835700 kubelet[2193]: E1101 00:43:58.835568 2193 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773"} Nov 1 00:43:58.835700 kubelet[2193]: E1101 00:43:58.835680 2193 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a4e6e4e0-2f67-459b-8ed4-30190e515a8d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:43:58.835917 kubelet[2193]: E1101 00:43:58.835759 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"a4e6e4e0-2f67-459b-8ed4-30190e515a8d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4bhnp" podUID="a4e6e4e0-2f67-459b-8ed4-30190e515a8d" Nov 1 00:43:58.866412 env[1301]: time="2025-11-01T00:43:58.866314197Z" level=error msg="StopPodSandbox for \"22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a\" failed" error="failed to destroy network for sandbox \"22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:58.867350 kubelet[2193]: E1101 00:43:58.867051 2193 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" Nov 1 00:43:58.867350 kubelet[2193]: E1101 00:43:58.867147 2193 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a"} Nov 1 00:43:58.867350 kubelet[2193]: E1101 00:43:58.867230 2193 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b2c78f36-7235-48aa-baae-2bd9c8a78b81\" with KillPodSandboxError: \"rpc error: code = Unknown 
desc = failed to destroy network for sandbox \\\"22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:43:58.867350 kubelet[2193]: E1101 00:43:58.867278 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b2c78f36-7235-48aa-baae-2bd9c8a78b81\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-7xkbs" podUID="b2c78f36-7235-48aa-baae-2bd9c8a78b81" Nov 1 00:43:58.923048 env[1301]: time="2025-11-01T00:43:58.921418482Z" level=error msg="StopPodSandbox for \"7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052\" failed" error="failed to destroy network for sandbox \"7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:58.924017 kubelet[2193]: E1101 00:43:58.923675 2193 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" Nov 1 00:43:58.924017 kubelet[2193]: E1101 
00:43:58.923775 2193 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052"} Nov 1 00:43:58.924017 kubelet[2193]: E1101 00:43:58.923890 2193 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4e92eb00-99ac-4f51-a076-ab8bc59ed374\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:43:58.924017 kubelet[2193]: E1101 00:43:58.923931 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4e92eb00-99ac-4f51-a076-ab8bc59ed374\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-699db95d94-pf9ws" podUID="4e92eb00-99ac-4f51-a076-ab8bc59ed374" Nov 1 00:43:58.944324 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773-shm.mount: Deactivated successfully. Nov 1 00:43:58.944602 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052-shm.mount: Deactivated successfully. 
Nov 1 00:43:58.956452 env[1301]: time="2025-11-01T00:43:58.956354770Z" level=error msg="StopPodSandbox for \"741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112\" failed" error="failed to destroy network for sandbox \"741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:58.956919 kubelet[2193]: E1101 00:43:58.956844 2193 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" Nov 1 00:43:58.957143 kubelet[2193]: E1101 00:43:58.956929 2193 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112"} Nov 1 00:43:58.957143 kubelet[2193]: E1101 00:43:58.956988 2193 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"74931096-7bc0-4134-a8b4-61ec9bf5e338\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:43:58.957143 kubelet[2193]: E1101 00:43:58.957033 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"74931096-7bc0-4134-a8b4-61ec9bf5e338\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-699db95d94-lmk4w" podUID="74931096-7bc0-4134-a8b4-61ec9bf5e338" Nov 1 00:44:04.768205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2256365758.mount: Deactivated successfully. Nov 1 00:44:04.810841 env[1301]: time="2025-11-01T00:44:04.810771564Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:44:04.813754 env[1301]: time="2025-11-01T00:44:04.813708998Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:44:04.815779 env[1301]: time="2025-11-01T00:44:04.815739937Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:44:04.817736 env[1301]: time="2025-11-01T00:44:04.817699379Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:44:04.818438 env[1301]: time="2025-11-01T00:44:04.818393492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 00:44:04.850855 env[1301]: time="2025-11-01T00:44:04.850789703Z" level=info msg="CreateContainer within 
sandbox \"e866939bc6b8fd56ef3e0ce1b7f23248df9102762d451478cca5e5a58b97ae53\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 00:44:04.877832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4018605680.mount: Deactivated successfully. Nov 1 00:44:04.883463 env[1301]: time="2025-11-01T00:44:04.883394596Z" level=info msg="CreateContainer within sandbox \"e866939bc6b8fd56ef3e0ce1b7f23248df9102762d451478cca5e5a58b97ae53\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2af32c85892ea59a0f32d6a0e075814b8e2f843effb3d9a803bf09d2a2ad08ef\"" Nov 1 00:44:04.886405 env[1301]: time="2025-11-01T00:44:04.886356894Z" level=info msg="StartContainer for \"2af32c85892ea59a0f32d6a0e075814b8e2f843effb3d9a803bf09d2a2ad08ef\"" Nov 1 00:44:04.965229 env[1301]: time="2025-11-01T00:44:04.961626260Z" level=info msg="StartContainer for \"2af32c85892ea59a0f32d6a0e075814b8e2f843effb3d9a803bf09d2a2ad08ef\" returns successfully" Nov 1 00:44:05.100126 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 00:44:05.100503 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 1 00:44:05.237432 env[1301]: time="2025-11-01T00:44:05.237362064Z" level=info msg="StopPodSandbox for \"534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6\"" Nov 1 00:44:05.438151 env[1301]: 2025-11-01 00:44:05.359 [INFO][3382] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" Nov 1 00:44:05.438151 env[1301]: 2025-11-01 00:44:05.360 [INFO][3382] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" iface="eth0" netns="/var/run/netns/cni-50aa2036-cee3-16ba-3417-1428338898c4" Nov 1 00:44:05.438151 env[1301]: 2025-11-01 00:44:05.362 [INFO][3382] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" iface="eth0" netns="/var/run/netns/cni-50aa2036-cee3-16ba-3417-1428338898c4" Nov 1 00:44:05.438151 env[1301]: 2025-11-01 00:44:05.362 [INFO][3382] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" iface="eth0" netns="/var/run/netns/cni-50aa2036-cee3-16ba-3417-1428338898c4" Nov 1 00:44:05.438151 env[1301]: 2025-11-01 00:44:05.362 [INFO][3382] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" Nov 1 00:44:05.438151 env[1301]: 2025-11-01 00:44:05.362 [INFO][3382] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" Nov 1 00:44:05.438151 env[1301]: 2025-11-01 00:44:05.422 [INFO][3390] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" HandleID="k8s-pod-network.534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-whisker--5869c7ff56--5ffpp-eth0" Nov 1 00:44:05.438151 env[1301]: 2025-11-01 00:44:05.423 [INFO][3390] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:05.438151 env[1301]: 2025-11-01 00:44:05.423 [INFO][3390] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:05.438151 env[1301]: 2025-11-01 00:44:05.430 [WARNING][3390] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" HandleID="k8s-pod-network.534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-whisker--5869c7ff56--5ffpp-eth0" Nov 1 00:44:05.438151 env[1301]: 2025-11-01 00:44:05.430 [INFO][3390] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" HandleID="k8s-pod-network.534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-whisker--5869c7ff56--5ffpp-eth0" Nov 1 00:44:05.438151 env[1301]: 2025-11-01 00:44:05.431 [INFO][3390] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:05.438151 env[1301]: 2025-11-01 00:44:05.435 [INFO][3382] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" Nov 1 00:44:05.438930 env[1301]: time="2025-11-01T00:44:05.438625891Z" level=info msg="TearDown network for sandbox \"534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6\" successfully" Nov 1 00:44:05.438930 env[1301]: time="2025-11-01T00:44:05.438672868Z" level=info msg="StopPodSandbox for \"534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6\" returns successfully" Nov 1 00:44:05.562130 kubelet[2193]: I1101 00:44:05.560533 2193 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/672f6371-8fa5-4e93-b64e-cf94de884875-whisker-backend-key-pair\") pod \"672f6371-8fa5-4e93-b64e-cf94de884875\" (UID: \"672f6371-8fa5-4e93-b64e-cf94de884875\") " Nov 1 00:44:05.562130 kubelet[2193]: I1101 00:44:05.560602 2193 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/672f6371-8fa5-4e93-b64e-cf94de884875-whisker-ca-bundle\") pod \"672f6371-8fa5-4e93-b64e-cf94de884875\" (UID: \"672f6371-8fa5-4e93-b64e-cf94de884875\") " Nov 1 00:44:05.562130 kubelet[2193]: I1101 00:44:05.560638 2193 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgxqf\" (UniqueName: \"kubernetes.io/projected/672f6371-8fa5-4e93-b64e-cf94de884875-kube-api-access-bgxqf\") pod \"672f6371-8fa5-4e93-b64e-cf94de884875\" (UID: \"672f6371-8fa5-4e93-b64e-cf94de884875\") " Nov 1 00:44:05.563374 kubelet[2193]: I1101 00:44:05.562012 2193 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/672f6371-8fa5-4e93-b64e-cf94de884875-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "672f6371-8fa5-4e93-b64e-cf94de884875" (UID: "672f6371-8fa5-4e93-b64e-cf94de884875"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:44:05.565914 kubelet[2193]: I1101 00:44:05.565872 2193 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/672f6371-8fa5-4e93-b64e-cf94de884875-kube-api-access-bgxqf" (OuterVolumeSpecName: "kube-api-access-bgxqf") pod "672f6371-8fa5-4e93-b64e-cf94de884875" (UID: "672f6371-8fa5-4e93-b64e-cf94de884875"). InnerVolumeSpecName "kube-api-access-bgxqf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:44:05.567599 kubelet[2193]: I1101 00:44:05.567560 2193 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/672f6371-8fa5-4e93-b64e-cf94de884875-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "672f6371-8fa5-4e93-b64e-cf94de884875" (UID: "672f6371-8fa5-4e93-b64e-cf94de884875"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:44:05.661336 kubelet[2193]: I1101 00:44:05.661284 2193 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/672f6371-8fa5-4e93-b64e-cf94de884875-whisker-ca-bundle\") on node \"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" DevicePath \"\"" Nov 1 00:44:05.661641 kubelet[2193]: I1101 00:44:05.661614 2193 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bgxqf\" (UniqueName: \"kubernetes.io/projected/672f6371-8fa5-4e93-b64e-cf94de884875-kube-api-access-bgxqf\") on node \"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" DevicePath \"\"" Nov 1 00:44:05.661776 kubelet[2193]: I1101 00:44:05.661748 2193 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/672f6371-8fa5-4e93-b64e-cf94de884875-whisker-backend-key-pair\") on node \"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" DevicePath \"\"" Nov 1 00:44:05.771042 systemd[1]: run-netns-cni\x2d50aa2036\x2dcee3\x2d16ba\x2d3417\x2d1428338898c4.mount: Deactivated successfully. Nov 1 00:44:05.773468 systemd[1]: var-lib-kubelet-pods-672f6371\x2d8fa5\x2d4e93\x2db64e\x2dcf94de884875-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbgxqf.mount: Deactivated successfully. Nov 1 00:44:05.773689 systemd[1]: var-lib-kubelet-pods-672f6371\x2d8fa5\x2d4e93\x2db64e\x2dcf94de884875-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 1 00:44:05.863208 kubelet[2193]: I1101 00:44:05.861367 2193 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2jntr" podStartSLOduration=2.101269867 podStartE2EDuration="20.861333127s" podCreationTimestamp="2025-11-01 00:43:45 +0000 UTC" firstStartedPulling="2025-11-01 00:43:46.059926191 +0000 UTC m=+29.007027563" lastFinishedPulling="2025-11-01 00:44:04.819989446 +0000 UTC m=+47.767090823" observedRunningTime="2025-11-01 00:44:05.834684965 +0000 UTC m=+48.781786348" watchObservedRunningTime="2025-11-01 00:44:05.861333127 +0000 UTC m=+48.808434512" Nov 1 00:44:05.926033 kubelet[2193]: I1101 00:44:05.925957 2193 status_manager.go:890] "Failed to get status for pod" podUID="bae1cc02-5d35-4e6c-8d44-6ad010de9d41" pod="calico-system/whisker-768cf9cc9d-2cqdw" err="pods \"whisker-768cf9cc9d-2cqdw\" is forbidden: User \"system:node:ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762' and this object" Nov 1 00:44:05.926362 kubelet[2193]: W1101 00:44:05.926097 2193 reflector.go:569] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762' and this object Nov 1 00:44:05.926362 kubelet[2193]: E1101 00:44:05.926140 2193 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762\" cannot list resource \"secrets\" in API group 
\"\" in the namespace \"calico-system\": no relationship found between node 'ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762' and this object" logger="UnhandledError" Nov 1 00:44:05.965210 kubelet[2193]: I1101 00:44:05.964887 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bae1cc02-5d35-4e6c-8d44-6ad010de9d41-whisker-ca-bundle\") pod \"whisker-768cf9cc9d-2cqdw\" (UID: \"bae1cc02-5d35-4e6c-8d44-6ad010de9d41\") " pod="calico-system/whisker-768cf9cc9d-2cqdw" Nov 1 00:44:05.965210 kubelet[2193]: I1101 00:44:05.965017 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkph4\" (UniqueName: \"kubernetes.io/projected/bae1cc02-5d35-4e6c-8d44-6ad010de9d41-kube-api-access-fkph4\") pod \"whisker-768cf9cc9d-2cqdw\" (UID: \"bae1cc02-5d35-4e6c-8d44-6ad010de9d41\") " pod="calico-system/whisker-768cf9cc9d-2cqdw" Nov 1 00:44:05.965210 kubelet[2193]: I1101 00:44:05.965105 2193 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bae1cc02-5d35-4e6c-8d44-6ad010de9d41-whisker-backend-key-pair\") pod \"whisker-768cf9cc9d-2cqdw\" (UID: \"bae1cc02-5d35-4e6c-8d44-6ad010de9d41\") " pod="calico-system/whisker-768cf9cc9d-2cqdw" Nov 1 00:44:06.660000 audit[3476]: AVC avc: denied { write } for pid=3476 comm="tee" name="fd" dev="proc" ino=24155 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:44:06.666943 kernel: kauditd_printk_skb: 20 callbacks suppressed Nov 1 00:44:06.667208 kernel: audit: type=1400 audit(1761957846.660:285): avc: denied { write } for pid=3476 comm="tee" name="fd" dev="proc" ino=24155 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:44:06.660000 audit[3476]: SYSCALL 
arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffcf5e6783 a2=241 a3=1b6 items=1 ppid=3446 pid=3476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:06.721211 kernel: audit: type=1300 audit(1761957846.660:285): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffcf5e6783 a2=241 a3=1b6 items=1 ppid=3446 pid=3476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:06.660000 audit: CWD cwd="/etc/service/enabled/bird6/log" Nov 1 00:44:06.730268 kernel: audit: type=1307 audit(1761957846.660:285): cwd="/etc/service/enabled/bird6/log" Nov 1 00:44:06.660000 audit: PATH item=0 name="/dev/fd/63" inode=25320 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:44:06.759208 kernel: audit: type=1302 audit(1761957846.660:285): item=0 name="/dev/fd/63" inode=25320 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:44:06.660000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:44:06.803209 kernel: audit: type=1327 audit(1761957846.660:285): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:44:06.718000 audit[3485]: AVC avc: denied { write } for pid=3485 comm="tee" name="fd" dev="proc" ino=24169 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:44:06.718000 audit[3485]: 
SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffce5772774 a2=241 a3=1b6 items=1 ppid=3440 pid=3485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:06.889000 kernel: audit: type=1400 audit(1761957846.718:286): avc: denied { write } for pid=3485 comm="tee" name="fd" dev="proc" ino=24169 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:44:06.889211 kernel: audit: type=1300 audit(1761957846.718:286): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffce5772774 a2=241 a3=1b6 items=1 ppid=3440 pid=3485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:06.718000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Nov 1 00:44:06.918293 systemd[1]: run-containerd-runc-k8s.io-2af32c85892ea59a0f32d6a0e075814b8e2f843effb3d9a803bf09d2a2ad08ef-runc.rl6Szp.mount: Deactivated successfully. 
Nov 1 00:44:06.926214 kernel: audit: type=1307 audit(1761957846.718:286): cwd="/etc/service/enabled/node-status-reporter/log" Nov 1 00:44:06.718000 audit: PATH item=0 name="/dev/fd/63" inode=24151 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:44:06.955207 kernel: audit: type=1302 audit(1761957846.718:286): item=0 name="/dev/fd/63" inode=24151 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:44:06.718000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:44:06.978214 kernel: audit: type=1327 audit(1761957846.718:286): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:44:06.722000 audit[3487]: AVC avc: denied { write } for pid=3487 comm="tee" name="fd" dev="proc" ino=24173 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:44:06.722000 audit[3487]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe2999e784 a2=241 a3=1b6 items=1 ppid=3451 pid=3487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:06.722000 audit: CWD cwd="/etc/service/enabled/bird/log" Nov 1 00:44:06.722000 audit: PATH item=0 name="/dev/fd/63" inode=24152 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:44:06.722000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:44:06.745000 audit[3482]: AVC avc: denied { write } for pid=3482 comm="tee" name="fd" dev="proc" ino=24177 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:44:06.745000 audit[3482]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff7ae61773 a2=241 a3=1b6 items=1 ppid=3439 pid=3482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:06.745000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Nov 1 00:44:06.745000 audit: PATH item=0 name="/dev/fd/63" inode=24146 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:44:06.745000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:44:06.751000 audit[3514]: AVC avc: denied { write } for pid=3514 comm="tee" name="fd" dev="proc" ino=24181 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:44:06.751000 audit[3514]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd225be783 a2=241 a3=1b6 items=1 ppid=3452 pid=3514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:06.751000 audit: CWD cwd="/etc/service/enabled/felix/log" Nov 1 00:44:06.751000 audit: PATH item=0 name="/dev/fd/63" inode=24175 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:44:06.751000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:44:06.751000 audit[3479]: AVC avc: denied { write } for pid=3479 comm="tee" name="fd" dev="proc" ino=24183 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:44:06.751000 audit[3479]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff2d5d8785 a2=241 a3=1b6 items=1 ppid=3443 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:06.751000 audit: CWD cwd="/etc/service/enabled/cni/log" Nov 1 00:44:06.751000 audit: PATH item=0 name="/dev/fd/63" inode=25323 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:44:06.751000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:44:06.759000 audit[3512]: AVC avc: denied { write } for pid=3512 comm="tee" name="fd" dev="proc" ino=24187 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:44:06.759000 audit[3512]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc3f3c6783 a2=241 a3=1b6 items=1 ppid=3448 pid=3512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:06.759000 audit: CWD cwd="/etc/service/enabled/confd/log" Nov 1 00:44:06.759000 audit: PATH item=0 name="/dev/fd/63" inode=24162 dev=00:0c mode=010600 
ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:44:06.759000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:44:07.067488 kubelet[2193]: E1101 00:44:07.067437 2193 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition Nov 1 00:44:07.068284 kubelet[2193]: E1101 00:44:07.067603 2193 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bae1cc02-5d35-4e6c-8d44-6ad010de9d41-whisker-backend-key-pair podName:bae1cc02-5d35-4e6c-8d44-6ad010de9d41 nodeName:}" failed. No retries permitted until 2025-11-01 00:44:07.56757089 +0000 UTC m=+50.514672274 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/bae1cc02-5d35-4e6c-8d44-6ad010de9d41-whisker-backend-key-pair") pod "whisker-768cf9cc9d-2cqdw" (UID: "bae1cc02-5d35-4e6c-8d44-6ad010de9d41") : failed to sync secret cache: timed out waiting for the condition Nov 1 00:44:07.320838 kubelet[2193]: I1101 00:44:07.320154 2193 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:44:07.381000 audit[3544]: NETFILTER_CFG table=filter:103 family=2 entries=21 op=nft_register_rule pid=3544 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:07.381000 audit[3544]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdf25cfdf0 a2=0 a3=7ffdf25cfddc items=0 ppid=2355 pid=3544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:07.381000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:07.387000 audit[3544]: NETFILTER_CFG table=nat:104 family=2 entries=19 op=nft_register_chain pid=3544 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:07.387000 audit[3544]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffdf25cfdf0 a2=0 a3=7ffdf25cfddc items=0 ppid=2355 pid=3544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:07.387000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:07.397540 kubelet[2193]: I1101 00:44:07.397471 2193 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="672f6371-8fa5-4e93-b64e-cf94de884875" path="/var/lib/kubelet/pods/672f6371-8fa5-4e93-b64e-cf94de884875/volumes" Nov 1 00:44:07.732098 env[1301]: time="2025-11-01T00:44:07.731928188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-768cf9cc9d-2cqdw,Uid:bae1cc02-5d35-4e6c-8d44-6ad010de9d41,Namespace:calico-system,Attempt:0,}" Nov 1 00:44:07.907604 systemd-networkd[1062]: cali1045c9cbfbc: Link UP Nov 1 00:44:07.922940 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 00:44:07.923083 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali1045c9cbfbc: link becomes ready Nov 1 00:44:07.924731 systemd-networkd[1062]: cali1045c9cbfbc: Gained carrier Nov 1 00:44:07.949344 env[1301]: 2025-11-01 00:44:07.790 [INFO][3549] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:44:07.949344 env[1301]: 2025-11-01 00:44:07.805 [INFO][3549] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-whisker--768cf9cc9d--2cqdw-eth0 whisker-768cf9cc9d- calico-system bae1cc02-5d35-4e6c-8d44-6ad010de9d41 934 0 2025-11-01 00:44:05 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:768cf9cc9d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762 whisker-768cf9cc9d-2cqdw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1045c9cbfbc [] [] }} ContainerID="c207fc8a1d8ce3b4eba23448f48a3d1dfa7c3b9134e6ab85755a391d5feee048" Namespace="calico-system" Pod="whisker-768cf9cc9d-2cqdw" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-whisker--768cf9cc9d--2cqdw-" Nov 1 00:44:07.949344 env[1301]: 2025-11-01 00:44:07.805 [INFO][3549] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c207fc8a1d8ce3b4eba23448f48a3d1dfa7c3b9134e6ab85755a391d5feee048" Namespace="calico-system" Pod="whisker-768cf9cc9d-2cqdw" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-whisker--768cf9cc9d--2cqdw-eth0" Nov 1 00:44:07.949344 env[1301]: 2025-11-01 00:44:07.838 [INFO][3562] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c207fc8a1d8ce3b4eba23448f48a3d1dfa7c3b9134e6ab85755a391d5feee048" HandleID="k8s-pod-network.c207fc8a1d8ce3b4eba23448f48a3d1dfa7c3b9134e6ab85755a391d5feee048" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-whisker--768cf9cc9d--2cqdw-eth0" Nov 1 00:44:07.949344 env[1301]: 2025-11-01 00:44:07.839 [INFO][3562] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c207fc8a1d8ce3b4eba23448f48a3d1dfa7c3b9134e6ab85755a391d5feee048" HandleID="k8s-pod-network.c207fc8a1d8ce3b4eba23448f48a3d1dfa7c3b9134e6ab85755a391d5feee048" 
Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-whisker--768cf9cc9d--2cqdw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5070), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", "pod":"whisker-768cf9cc9d-2cqdw", "timestamp":"2025-11-01 00:44:07.838952962 +0000 UTC"}, Hostname:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:44:07.949344 env[1301]: 2025-11-01 00:44:07.839 [INFO][3562] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:07.949344 env[1301]: 2025-11-01 00:44:07.839 [INFO][3562] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:07.949344 env[1301]: 2025-11-01 00:44:07.839 [INFO][3562] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762' Nov 1 00:44:07.949344 env[1301]: 2025-11-01 00:44:07.849 [INFO][3562] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c207fc8a1d8ce3b4eba23448f48a3d1dfa7c3b9134e6ab85755a391d5feee048" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:07.949344 env[1301]: 2025-11-01 00:44:07.856 [INFO][3562] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:07.949344 env[1301]: 2025-11-01 00:44:07.861 [INFO][3562] ipam/ipam.go 511: Trying affinity for 192.168.97.64/26 host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:07.949344 env[1301]: 2025-11-01 00:44:07.864 [INFO][3562] ipam/ipam.go 158: Attempting to load block cidr=192.168.97.64/26 host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:07.949344 env[1301]: 
2025-11-01 00:44:07.866 [INFO][3562] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.97.64/26 host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:07.949344 env[1301]: 2025-11-01 00:44:07.866 [INFO][3562] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.97.64/26 handle="k8s-pod-network.c207fc8a1d8ce3b4eba23448f48a3d1dfa7c3b9134e6ab85755a391d5feee048" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:07.949344 env[1301]: 2025-11-01 00:44:07.868 [INFO][3562] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c207fc8a1d8ce3b4eba23448f48a3d1dfa7c3b9134e6ab85755a391d5feee048 Nov 1 00:44:07.949344 env[1301]: 2025-11-01 00:44:07.874 [INFO][3562] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.97.64/26 handle="k8s-pod-network.c207fc8a1d8ce3b4eba23448f48a3d1dfa7c3b9134e6ab85755a391d5feee048" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:07.949344 env[1301]: 2025-11-01 00:44:07.881 [INFO][3562] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.97.65/26] block=192.168.97.64/26 handle="k8s-pod-network.c207fc8a1d8ce3b4eba23448f48a3d1dfa7c3b9134e6ab85755a391d5feee048" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:07.949344 env[1301]: 2025-11-01 00:44:07.881 [INFO][3562] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.97.65/26] handle="k8s-pod-network.c207fc8a1d8ce3b4eba23448f48a3d1dfa7c3b9134e6ab85755a391d5feee048" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:07.949344 env[1301]: 2025-11-01 00:44:07.881 [INFO][3562] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:44:07.949344 env[1301]: 2025-11-01 00:44:07.881 [INFO][3562] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.97.65/26] IPv6=[] ContainerID="c207fc8a1d8ce3b4eba23448f48a3d1dfa7c3b9134e6ab85755a391d5feee048" HandleID="k8s-pod-network.c207fc8a1d8ce3b4eba23448f48a3d1dfa7c3b9134e6ab85755a391d5feee048" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-whisker--768cf9cc9d--2cqdw-eth0" Nov 1 00:44:07.950596 env[1301]: 2025-11-01 00:44:07.883 [INFO][3549] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c207fc8a1d8ce3b4eba23448f48a3d1dfa7c3b9134e6ab85755a391d5feee048" Namespace="calico-system" Pod="whisker-768cf9cc9d-2cqdw" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-whisker--768cf9cc9d--2cqdw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-whisker--768cf9cc9d--2cqdw-eth0", GenerateName:"whisker-768cf9cc9d-", Namespace:"calico-system", SelfLink:"", UID:"bae1cc02-5d35-4e6c-8d44-6ad010de9d41", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 44, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"768cf9cc9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", ContainerID:"", Pod:"whisker-768cf9cc9d-2cqdw", Endpoint:"eth0", ServiceAccountName:"whisker", 
IPNetworks:[]string{"192.168.97.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1045c9cbfbc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:07.950596 env[1301]: 2025-11-01 00:44:07.884 [INFO][3549] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.65/32] ContainerID="c207fc8a1d8ce3b4eba23448f48a3d1dfa7c3b9134e6ab85755a391d5feee048" Namespace="calico-system" Pod="whisker-768cf9cc9d-2cqdw" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-whisker--768cf9cc9d--2cqdw-eth0" Nov 1 00:44:07.950596 env[1301]: 2025-11-01 00:44:07.884 [INFO][3549] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1045c9cbfbc ContainerID="c207fc8a1d8ce3b4eba23448f48a3d1dfa7c3b9134e6ab85755a391d5feee048" Namespace="calico-system" Pod="whisker-768cf9cc9d-2cqdw" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-whisker--768cf9cc9d--2cqdw-eth0" Nov 1 00:44:07.950596 env[1301]: 2025-11-01 00:44:07.926 [INFO][3549] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c207fc8a1d8ce3b4eba23448f48a3d1dfa7c3b9134e6ab85755a391d5feee048" Namespace="calico-system" Pod="whisker-768cf9cc9d-2cqdw" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-whisker--768cf9cc9d--2cqdw-eth0" Nov 1 00:44:07.950596 env[1301]: 2025-11-01 00:44:07.926 [INFO][3549] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c207fc8a1d8ce3b4eba23448f48a3d1dfa7c3b9134e6ab85755a391d5feee048" Namespace="calico-system" Pod="whisker-768cf9cc9d-2cqdw" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-whisker--768cf9cc9d--2cqdw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-whisker--768cf9cc9d--2cqdw-eth0", GenerateName:"whisker-768cf9cc9d-", Namespace:"calico-system", SelfLink:"", UID:"bae1cc02-5d35-4e6c-8d44-6ad010de9d41", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 44, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"768cf9cc9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", ContainerID:"c207fc8a1d8ce3b4eba23448f48a3d1dfa7c3b9134e6ab85755a391d5feee048", Pod:"whisker-768cf9cc9d-2cqdw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.97.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1045c9cbfbc", MAC:"6e:63:63:2d:49:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:07.950596 env[1301]: 2025-11-01 00:44:07.943 [INFO][3549] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c207fc8a1d8ce3b4eba23448f48a3d1dfa7c3b9134e6ab85755a391d5feee048" Namespace="calico-system" Pod="whisker-768cf9cc9d-2cqdw" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-whisker--768cf9cc9d--2cqdw-eth0" Nov 1 00:44:07.972775 env[1301]: time="2025-11-01T00:44:07.972651689Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:44:07.972775 env[1301]: time="2025-11-01T00:44:07.972714121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:44:07.974102 env[1301]: time="2025-11-01T00:44:07.972736147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:44:07.974102 env[1301]: time="2025-11-01T00:44:07.972976273Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c207fc8a1d8ce3b4eba23448f48a3d1dfa7c3b9134e6ab85755a391d5feee048 pid=3584 runtime=io.containerd.runc.v2 Nov 1 00:44:08.088777 env[1301]: time="2025-11-01T00:44:08.088715711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-768cf9cc9d-2cqdw,Uid:bae1cc02-5d35-4e6c-8d44-6ad010de9d41,Namespace:calico-system,Attempt:0,} returns sandbox id \"c207fc8a1d8ce3b4eba23448f48a3d1dfa7c3b9134e6ab85755a391d5feee048\"" Nov 1 00:44:08.099387 env[1301]: time="2025-11-01T00:44:08.099331851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:44:08.296115 env[1301]: time="2025-11-01T00:44:08.296022960Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:08.298366 env[1301]: time="2025-11-01T00:44:08.298153767Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:44:08.298974 kubelet[2193]: E1101 00:44:08.298902 2193 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:44:08.299570 kubelet[2193]: E1101 00:44:08.298992 2193 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:44:08.299653 kubelet[2193]: E1101 00:44:08.299239 2193 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:082e0ea68660410580a68d5ee8e902f4,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fkph4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:fals
e,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-768cf9cc9d-2cqdw_calico-system(bae1cc02-5d35-4e6c-8d44-6ad010de9d41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:08.301878 env[1301]: time="2025-11-01T00:44:08.301828463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:44:08.500436 env[1301]: time="2025-11-01T00:44:08.500234014Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:08.503140 env[1301]: time="2025-11-01T00:44:08.503058276Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:44:08.503897 kubelet[2193]: E1101 00:44:08.503791 2193 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:44:08.503897 kubelet[2193]: E1101 00:44:08.503869 2193 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:44:08.504103 kubelet[2193]: E1101 00:44:08.504038 2193 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fkph4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices
:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-768cf9cc9d-2cqdw_calico-system(bae1cc02-5d35-4e6c-8d44-6ad010de9d41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:08.505551 kubelet[2193]: E1101 00:44:08.505478 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-768cf9cc9d-2cqdw" podUID="bae1cc02-5d35-4e6c-8d44-6ad010de9d41" Nov 1 00:44:08.708000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.708000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.708000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
00:44:08.708000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.708000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.708000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.708000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.708000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.708000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.708000 audit: BPF prog-id=10 op=LOAD Nov 1 00:44:08.708000 audit[3674]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff6393ac00 a2=98 a3=1fffffffffffffff items=0 ppid=3639 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.708000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 00:44:08.710000 
audit: BPF prog-id=10 op=UNLOAD Nov 1 00:44:08.710000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.710000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.710000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.710000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.710000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.710000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.710000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.710000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.710000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.710000 audit: BPF prog-id=11 
op=LOAD Nov 1 00:44:08.710000 audit[3674]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff6393aae0 a2=94 a3=3 items=0 ppid=3639 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.710000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 00:44:08.711000 audit: BPF prog-id=11 op=UNLOAD Nov 1 00:44:08.711000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.711000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.711000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.711000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.711000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.711000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.711000 
audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.711000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.711000 audit[3674]: AVC avc: denied { bpf } for pid=3674 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.711000 audit: BPF prog-id=12 op=LOAD Nov 1 00:44:08.711000 audit[3674]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff6393ab20 a2=94 a3=7fff6393ad00 items=0 ppid=3639 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.711000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 00:44:08.713000 audit: BPF prog-id=12 op=UNLOAD Nov 1 00:44:08.713000 audit[3674]: AVC avc: denied { perfmon } for pid=3674 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.713000 audit[3674]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7fff6393abf0 a2=50 a3=a000000085 items=0 ppid=3639 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.713000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 00:44:08.716000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.716000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.716000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.716000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.716000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.716000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.716000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.716000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.716000 
audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.716000 audit: BPF prog-id=13 op=LOAD Nov 1 00:44:08.716000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc0426d910 a2=98 a3=3 items=0 ppid=3639 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.716000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:08.718000 audit: BPF prog-id=13 op=UNLOAD Nov 1 00:44:08.718000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.718000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.718000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.718000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.718000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.718000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Nov 1 00:44:08.718000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.718000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.718000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.718000 audit: BPF prog-id=14 op=LOAD Nov 1 00:44:08.718000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc0426d700 a2=94 a3=54428f items=0 ppid=3639 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.718000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:08.720000 audit: BPF prog-id=14 op=UNLOAD Nov 1 00:44:08.720000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.720000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.720000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.720000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.720000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.720000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.720000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.720000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.720000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.720000 audit: BPF prog-id=15 op=LOAD Nov 1 00:44:08.720000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc0426d730 a2=94 a3=2 items=0 ppid=3639 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.720000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:08.721000 audit: BPF prog-id=15 op=UNLOAD Nov 1 00:44:08.814436 kubelet[2193]: E1101 00:44:08.814371 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-768cf9cc9d-2cqdw" podUID="bae1cc02-5d35-4e6c-8d44-6ad010de9d41" Nov 1 00:44:08.848000 audit[3677]: NETFILTER_CFG table=filter:105 family=2 entries=20 op=nft_register_rule pid=3677 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:08.848000 audit[3677]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff3457d340 a2=0 a3=7fff3457d32c items=0 ppid=2355 pid=3677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.848000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:08.851000 audit[3677]: NETFILTER_CFG table=nat:106 family=2 entries=14 op=nft_register_rule pid=3677 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:08.851000 audit[3677]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff3457d340 a2=0 a3=0 items=0 ppid=2355 pid=3677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.851000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:08.916000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.916000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.916000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.916000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.916000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.916000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.916000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.916000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.916000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.916000 audit: BPF prog-id=16 op=LOAD Nov 1 00:44:08.916000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc0426d5f0 a2=94 a3=1 items=0 ppid=3639 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.916000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:08.916000 audit: BPF prog-id=16 op=UNLOAD Nov 1 00:44:08.916000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.916000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffc0426d6c0 a2=50 a3=7ffc0426d7a0 items=0 ppid=3639 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.916000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:08.929000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.929000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc0426d600 a2=28 a3=0 items=0 ppid=3639 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.929000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:08.929000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.929000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc0426d630 a2=28 a3=0 items=0 ppid=3639 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.929000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:08.929000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.929000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc0426d540 a2=28 a3=0 items=0 ppid=3639 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.929000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:08.929000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.929000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc0426d650 a2=28 a3=0 items=0 ppid=3639 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.929000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:08.929000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 00:44:08.929000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc0426d630 a2=28 a3=0 items=0 ppid=3639 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.929000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:08.929000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.929000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc0426d620 a2=28 a3=0 items=0 ppid=3639 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.929000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:08.929000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.929000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc0426d650 a2=28 a3=0 items=0 ppid=3639 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.929000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:08.929000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.929000 audit[3675]: SYSCALL arch=c000003e 
syscall=321 success=no exit=-22 a0=12 a1=7ffc0426d630 a2=28 a3=0 items=0 ppid=3639 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.929000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:08.929000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.929000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc0426d650 a2=28 a3=0 items=0 ppid=3639 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.929000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:08.929000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.929000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc0426d620 a2=28 a3=0 items=0 ppid=3639 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.929000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:08.929000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.929000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc0426d690 a2=28 a3=0 items=0 ppid=3639 
pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.929000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:08.930000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.930000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc0426d440 a2=50 a3=1 items=0 ppid=3639 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.930000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:08.930000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.930000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.930000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.930000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.930000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
00:44:08.930000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.930000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.930000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.930000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.930000 audit: BPF prog-id=17 op=LOAD Nov 1 00:44:08.930000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc0426d440 a2=94 a3=5 items=0 ppid=3639 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.930000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:08.930000 audit: BPF prog-id=17 op=UNLOAD Nov 1 00:44:08.930000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.930000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc0426d4f0 a2=50 a3=1 items=0 ppid=3639 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.930000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:08.930000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.930000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffc0426d610 a2=4 a3=38 items=0 ppid=3639 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.930000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:08.930000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.930000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.930000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.930000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.930000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.930000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.930000 
audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.930000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.930000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.930000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.930000 audit[3675]: AVC avc: denied { confidentiality } for pid=3675 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:44:08.930000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc0426d660 a2=94 a3=6 items=0 ppid=3639 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.930000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:08.930000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.930000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.930000 audit[3675]: AVC avc: denied { perfmon } 
for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.930000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.930000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.930000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.930000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.930000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.930000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.930000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.930000 audit[3675]: AVC avc: denied { confidentiality } for pid=3675 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:44:08.930000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 
a1=7ffc0426ce10 a2=94 a3=88 items=0 ppid=3639 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.930000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:08.931000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.931000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.931000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.931000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.931000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.931000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.931000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.931000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.931000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.931000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.931000 audit[3675]: AVC avc: denied { confidentiality } for pid=3675 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:44:08.931000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc0426ce10 a2=94 a3=88 items=0 ppid=3639 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.931000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:44:08.944000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.944000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.944000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.944000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 00:44:08.944000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.944000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.944000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.944000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.944000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.944000 audit: BPF prog-id=18 op=LOAD Nov 1 00:44:08.944000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffec705e4e0 a2=98 a3=1999999999999999 items=0 ppid=3639 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.944000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Nov 1 00:44:08.944000 audit: BPF prog-id=18 op=UNLOAD Nov 1 00:44:08.944000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.944000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.944000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.944000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.944000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.944000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.944000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.944000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.944000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.944000 audit: BPF prog-id=19 op=LOAD Nov 1 00:44:08.944000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffec705e3c0 a2=94 a3=ffff items=0 ppid=3639 pid=3680 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.944000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Nov 1 00:44:08.944000 audit: BPF prog-id=19 op=UNLOAD Nov 1 00:44:08.944000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.944000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.944000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.944000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.944000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.944000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.944000 audit[3680]: AVC avc: denied { perfmon } for pid=3680 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 00:44:08.944000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.944000 audit[3680]: AVC avc: denied { bpf } for pid=3680 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:08.944000 audit: BPF prog-id=20 op=LOAD Nov 1 00:44:08.944000 audit[3680]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffec705e400 a2=94 a3=7ffec705e5e0 items=0 ppid=3639 pid=3680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.944000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Nov 1 00:44:08.944000 audit: BPF prog-id=20 op=UNLOAD Nov 1 00:44:09.239709 systemd-networkd[1062]: vxlan.calico: Link UP Nov 1 00:44:09.239723 systemd-networkd[1062]: vxlan.calico: Gained carrier Nov 1 00:44:09.277000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.277000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.277000 audit[3703]: AVC avc: denied { perfmon } for pid=3703 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
00:44:09.277000 audit[3703]: AVC avc: denied { perfmon } for pid=3703 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.277000 audit[3703]: AVC avc: denied { perfmon } for pid=3703 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.277000 audit[3703]: AVC avc: denied { perfmon } for pid=3703 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.277000 audit[3703]: AVC avc: denied { perfmon } for pid=3703 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.277000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.277000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.277000 audit: BPF prog-id=21 op=LOAD Nov 1 00:44:09.277000 audit[3703]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdf4012100 a2=98 a3=0 items=0 ppid=3639 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.277000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:09.277000 audit: BPF prog-id=21 op=UNLOAD Nov 1 00:44:09.277000 
audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.277000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.277000 audit[3703]: AVC avc: denied { perfmon } for pid=3703 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.277000 audit[3703]: AVC avc: denied { perfmon } for pid=3703 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.277000 audit[3703]: AVC avc: denied { perfmon } for pid=3703 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.277000 audit[3703]: AVC avc: denied { perfmon } for pid=3703 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.277000 audit[3703]: AVC avc: denied { perfmon } for pid=3703 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.277000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.277000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.277000 audit: BPF prog-id=22 op=LOAD Nov 1 00:44:09.277000 audit[3703]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdf4011f10 a2=94 a3=54428f items=0 ppid=3639 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.277000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:09.277000 audit: BPF prog-id=22 op=UNLOAD Nov 1 00:44:09.277000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.277000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.277000 audit[3703]: AVC avc: denied { perfmon } for pid=3703 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.277000 audit[3703]: AVC avc: denied { perfmon } for pid=3703 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.277000 audit[3703]: AVC avc: denied { perfmon } for pid=3703 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.277000 audit[3703]: AVC avc: denied { perfmon } for pid=3703 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.277000 audit[3703]: AVC avc: denied { perfmon } for pid=3703 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.277000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.277000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.277000 audit: BPF prog-id=23 op=LOAD Nov 1 00:44:09.277000 audit[3703]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdf4011f40 a2=94 a3=2 items=0 ppid=3639 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.277000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:09.277000 audit: BPF prog-id=23 op=UNLOAD Nov 1 00:44:09.277000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.277000 audit[3703]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdf4011e10 a2=28 a3=0 items=0 ppid=3639 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.277000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:09.277000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.277000 audit[3703]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdf4011e40 a2=28 a3=0 items=0 ppid=3639 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.277000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:09.277000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.277000 audit[3703]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdf4011d50 a2=28 a3=0 items=0 ppid=3639 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.277000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:09.277000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 00:44:09.277000 audit[3703]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdf4011e60 a2=28 a3=0 items=0 ppid=3639 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.277000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:09.277000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.277000 audit[3703]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdf4011e40 a2=28 a3=0 items=0 ppid=3639 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.277000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:09.277000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.277000 audit[3703]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdf4011e30 a2=28 a3=0 items=0 ppid=3639 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.277000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:09.284000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.284000 audit[3703]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdf4011e60 a2=28 a3=0 items=0 ppid=3639 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.284000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:09.284000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.284000 audit[3703]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdf4011e40 a2=28 a3=0 items=0 ppid=3639 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.284000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:09.284000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 00:44:09.284000 audit[3703]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdf4011e60 a2=28 a3=0 items=0 ppid=3639 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.284000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:09.284000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.284000 audit[3703]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdf4011e30 a2=28 a3=0 items=0 ppid=3639 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.284000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:09.284000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.284000 audit[3703]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdf4011ea0 a2=28 a3=0 items=0 ppid=3639 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.284000 audit: 
PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:09.284000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.284000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.284000 audit[3703]: AVC avc: denied { perfmon } for pid=3703 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.284000 audit[3703]: AVC avc: denied { perfmon } for pid=3703 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.284000 audit[3703]: AVC avc: denied { perfmon } for pid=3703 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.284000 audit[3703]: AVC avc: denied { perfmon } for pid=3703 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.284000 audit[3703]: AVC avc: denied { perfmon } for pid=3703 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.284000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.284000 audit[3703]: AVC avc: denied { bpf } for 
pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.284000 audit: BPF prog-id=24 op=LOAD Nov 1 00:44:09.284000 audit[3703]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdf4011d10 a2=94 a3=0 items=0 ppid=3639 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.284000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:09.284000 audit: BPF prog-id=24 op=UNLOAD Nov 1 00:44:09.285000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.285000 audit[3703]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffdf4011d00 a2=50 a3=2800 items=0 ppid=3639 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.285000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:09.293000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.293000 audit[3703]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffdf4011d00 a2=50 a3=2800 items=0 ppid=3639 
pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.293000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:09.293000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.293000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.293000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.293000 audit[3703]: AVC avc: denied { perfmon } for pid=3703 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.293000 audit[3703]: AVC avc: denied { perfmon } for pid=3703 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.293000 audit[3703]: AVC avc: denied { perfmon } for pid=3703 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.293000 audit[3703]: AVC avc: denied { perfmon } for pid=3703 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.293000 audit[3703]: AVC avc: denied { 
perfmon } for pid=3703 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.293000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.293000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.293000 audit: BPF prog-id=25 op=LOAD Nov 1 00:44:09.293000 audit[3703]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdf4011520 a2=94 a3=2 items=0 ppid=3639 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.293000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:09.293000 audit: BPF prog-id=25 op=UNLOAD Nov 1 00:44:09.293000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.293000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.293000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.293000 audit[3703]: AVC avc: denied { perfmon } for pid=3703 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.293000 audit[3703]: AVC avc: denied { perfmon } for pid=3703 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.293000 audit[3703]: AVC avc: denied { perfmon } for pid=3703 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.293000 audit[3703]: AVC avc: denied { perfmon } for pid=3703 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.293000 audit[3703]: AVC avc: denied { perfmon } for pid=3703 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.293000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.293000 audit[3703]: AVC avc: denied { bpf } for pid=3703 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.293000 audit: BPF prog-id=26 op=LOAD Nov 1 00:44:09.293000 audit[3703]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdf4011620 a2=94 a3=30 items=0 ppid=3639 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.293000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:44:09.307000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.307000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.307000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.307000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.307000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.307000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.307000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.307000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.307000 audit[3709]: AVC avc: denied { bpf } for pid=3709 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.307000 audit: BPF prog-id=27 op=LOAD Nov 1 00:44:09.307000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd64073890 a2=98 a3=0 items=0 ppid=3639 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.307000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:09.308000 audit: BPF prog-id=27 op=UNLOAD Nov 1 00:44:09.308000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.308000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.308000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.308000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.308000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.308000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.308000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.308000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.308000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.308000 audit: BPF prog-id=28 op=LOAD Nov 1 00:44:09.308000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd64073680 a2=94 a3=54428f items=0 ppid=3639 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.308000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:09.309000 audit: BPF prog-id=28 op=UNLOAD Nov 1 00:44:09.309000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.309000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.309000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.309000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.309000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.309000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.309000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.309000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.309000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.309000 audit: BPF prog-id=29 op=LOAD Nov 1 00:44:09.309000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd640736b0 a2=94 a3=2 items=0 ppid=3639 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.309000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:09.309000 audit: BPF 
prog-id=29 op=UNLOAD Nov 1 00:44:09.444000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.444000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.444000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.444000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.444000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.444000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.444000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.444000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.444000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.444000 audit: BPF prog-id=30 op=LOAD Nov 1 
00:44:09.444000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd64073570 a2=94 a3=1 items=0 ppid=3639 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.444000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:09.444000 audit: BPF prog-id=30 op=UNLOAD Nov 1 00:44:09.444000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.444000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffd64073640 a2=50 a3=7ffd64073720 items=0 ppid=3639 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.444000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:09.461000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.461000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd64073580 a2=28 a3=0 items=0 ppid=3639 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.461000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:09.461000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.461000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd640735b0 a2=28 a3=0 items=0 ppid=3639 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.461000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:09.461000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.461000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd640734c0 a2=28 a3=0 items=0 ppid=3639 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.461000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:09.461000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.461000 audit[3709]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=4 a0=12 a1=7ffd640735d0 a2=28 a3=0 items=0 ppid=3639 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.461000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:09.461000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.461000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd640735b0 a2=28 a3=0 items=0 ppid=3639 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.461000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:09.461000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.461000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd640735a0 a2=28 a3=0 items=0 ppid=3639 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.461000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:09.461000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.461000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd640735d0 a2=28 a3=0 items=0 ppid=3639 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.461000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:09.461000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.461000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd640735b0 a2=28 a3=0 items=0 ppid=3639 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.461000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:09.461000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.461000 audit[3709]: SYSCALL arch=c000003e 
syscall=321 success=no exit=-22 a0=12 a1=7ffd640735d0 a2=28 a3=0 items=0 ppid=3639 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.461000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:09.461000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.461000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd640735a0 a2=28 a3=0 items=0 ppid=3639 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.461000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:09.461000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.461000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd64073610 a2=28 a3=0 items=0 ppid=3639 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.461000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:09.461000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.461000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffd640733c0 a2=50 a3=1 items=0 ppid=3639 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.461000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:09.461000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.461000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.461000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.461000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.461000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 00:44:09.461000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.461000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.461000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.461000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.461000 audit: BPF prog-id=31 op=LOAD Nov 1 00:44:09.461000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd640733c0 a2=94 a3=5 items=0 ppid=3639 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.461000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:09.461000 audit: BPF prog-id=31 op=UNLOAD Nov 1 00:44:09.461000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.461000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffd64073470 a2=50 a3=1 items=0 ppid=3639 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.461000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:09.461000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.461000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffd64073590 a2=4 a3=38 items=0 ppid=3639 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.461000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:09.461000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.461000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.461000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.461000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.461000 audit[3709]: AVC avc: denied { 
perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.461000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.461000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.461000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.461000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.461000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.461000 audit[3709]: AVC avc: denied { confidentiality } for pid=3709 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:44:09.461000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd640735e0 a2=94 a3=6 items=0 ppid=3639 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.461000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:09.462000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.462000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.462000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.462000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.462000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.462000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.462000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.462000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.462000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.462000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.462000 audit[3709]: AVC avc: denied { confidentiality } for pid=3709 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:44:09.462000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd64072d90 a2=94 a3=88 items=0 ppid=3639 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.462000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:09.462000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.462000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.462000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.462000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 
1 00:44:09.462000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.462000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.462000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.462000 audit[3709]: AVC avc: denied { perfmon } for pid=3709 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.462000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd64072d90 a2=94 a3=88 items=0 ppid=3639 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.462000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:09.463000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.463000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd640747c0 a2=10 a3=f8f00800 items=0 ppid=3639 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.463000 audit: 
PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:09.463000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.463000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd64074660 a2=10 a3=3 items=0 ppid=3639 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.463000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:09.463000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.463000 audit[3709]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd64074600 a2=10 a3=3 items=0 ppid=3639 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.463000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:09.463000 audit[3709]: AVC avc: denied { bpf } for pid=3709 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:44:09.463000 audit[3709]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd64074600 a2=10 a3=7 items=0 ppid=3639 pid=3709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.463000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:44:09.471000 audit: BPF prog-id=26 op=UNLOAD Nov 1 00:44:09.569000 audit[3732]: NETFILTER_CFG table=nat:107 family=2 entries=15 op=nft_register_chain pid=3732 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:44:09.569000 audit[3732]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffdd715df90 a2=0 a3=7ffdd715df7c items=0 ppid=3639 pid=3732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.569000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:44:09.571000 audit[3736]: NETFILTER_CFG table=mangle:108 family=2 entries=16 op=nft_register_chain pid=3736 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:44:09.571000 audit[3736]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffd4913a1a0 a2=0 a3=7ffd4913a18c items=0 ppid=3639 pid=3736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.571000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:44:09.588000 audit[3733]: NETFILTER_CFG table=raw:109 family=2 entries=21 op=nft_register_chain pid=3733 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:44:09.588000 audit[3733]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffdc2281210 a2=0 a3=7ffdc22811fc items=0 ppid=3639 pid=3733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.588000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:44:09.590000 audit[3735]: NETFILTER_CFG table=filter:110 family=2 entries=94 op=nft_register_chain pid=3735 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:44:09.590000 audit[3735]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffe084c9070 a2=0 a3=7ffe084c905c items=0 ppid=3639 pid=3735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:09.590000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:44:09.816671 kubelet[2193]: E1101 00:44:09.816385 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-768cf9cc9d-2cqdw" podUID="bae1cc02-5d35-4e6c-8d44-6ad010de9d41" Nov 1 00:44:09.937352 systemd-networkd[1062]: cali1045c9cbfbc: Gained IPv6LL Nov 1 00:44:10.641427 systemd-networkd[1062]: vxlan.calico: Gained IPv6LL Nov 1 00:44:11.394642 env[1301]: time="2025-11-01T00:44:11.394572871Z" level=info msg="StopPodSandbox for \"6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3\"" Nov 1 00:44:11.533226 env[1301]: 2025-11-01 00:44:11.474 [INFO][3761] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" Nov 1 00:44:11.533226 env[1301]: 2025-11-01 00:44:11.474 [INFO][3761] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" iface="eth0" netns="/var/run/netns/cni-1c308e0b-5bcd-da7e-e6be-705e98ec0b68" Nov 1 00:44:11.533226 env[1301]: 2025-11-01 00:44:11.475 [INFO][3761] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" iface="eth0" netns="/var/run/netns/cni-1c308e0b-5bcd-da7e-e6be-705e98ec0b68" Nov 1 00:44:11.533226 env[1301]: 2025-11-01 00:44:11.476 [INFO][3761] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" iface="eth0" netns="/var/run/netns/cni-1c308e0b-5bcd-da7e-e6be-705e98ec0b68" Nov 1 00:44:11.533226 env[1301]: 2025-11-01 00:44:11.476 [INFO][3761] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" Nov 1 00:44:11.533226 env[1301]: 2025-11-01 00:44:11.476 [INFO][3761] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" Nov 1 00:44:11.533226 env[1301]: 2025-11-01 00:44:11.517 [INFO][3769] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" HandleID="k8s-pod-network.6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-goldmane--666569f655--xt7wl-eth0" Nov 1 00:44:11.533226 env[1301]: 2025-11-01 00:44:11.518 [INFO][3769] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:11.533226 env[1301]: 2025-11-01 00:44:11.518 [INFO][3769] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:11.533226 env[1301]: 2025-11-01 00:44:11.526 [WARNING][3769] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" HandleID="k8s-pod-network.6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-goldmane--666569f655--xt7wl-eth0" Nov 1 00:44:11.533226 env[1301]: 2025-11-01 00:44:11.526 [INFO][3769] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" HandleID="k8s-pod-network.6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-goldmane--666569f655--xt7wl-eth0" Nov 1 00:44:11.533226 env[1301]: 2025-11-01 00:44:11.529 [INFO][3769] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:11.533226 env[1301]: 2025-11-01 00:44:11.531 [INFO][3761] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" Nov 1 00:44:11.538548 env[1301]: time="2025-11-01T00:44:11.538399621Z" level=info msg="TearDown network for sandbox \"6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3\" successfully" Nov 1 00:44:11.539007 env[1301]: time="2025-11-01T00:44:11.538932764Z" level=info msg="StopPodSandbox for \"6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3\" returns successfully" Nov 1 00:44:11.541301 env[1301]: time="2025-11-01T00:44:11.541244126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-xt7wl,Uid:bb5676df-eb26-4a3d-9a39-dc277ac29b28,Namespace:calico-system,Attempt:1,}" Nov 1 00:44:11.543817 systemd[1]: run-netns-cni\x2d1c308e0b\x2d5bcd\x2dda7e\x2de6be\x2d705e98ec0b68.mount: Deactivated successfully. 
Nov 1 00:44:11.724844 systemd-networkd[1062]: calia9aba9247e4: Link UP Nov 1 00:44:11.743761 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 00:44:11.744305 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia9aba9247e4: link becomes ready Nov 1 00:44:11.747528 systemd-networkd[1062]: calia9aba9247e4: Gained carrier Nov 1 00:44:11.769010 env[1301]: 2025-11-01 00:44:11.624 [INFO][3775] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-goldmane--666569f655--xt7wl-eth0 goldmane-666569f655- calico-system bb5676df-eb26-4a3d-9a39-dc277ac29b28 973 0 2025-11-01 00:43:42 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762 goldmane-666569f655-xt7wl eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia9aba9247e4 [] [] }} ContainerID="d7cd9fd77f88b6b66516b3e91558b918d3cc8375b0c46c32719768ebb561b0d4" Namespace="calico-system" Pod="goldmane-666569f655-xt7wl" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-goldmane--666569f655--xt7wl-" Nov 1 00:44:11.769010 env[1301]: 2025-11-01 00:44:11.624 [INFO][3775] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d7cd9fd77f88b6b66516b3e91558b918d3cc8375b0c46c32719768ebb561b0d4" Namespace="calico-system" Pod="goldmane-666569f655-xt7wl" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-goldmane--666569f655--xt7wl-eth0" Nov 1 00:44:11.769010 env[1301]: 2025-11-01 00:44:11.663 [INFO][3788] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d7cd9fd77f88b6b66516b3e91558b918d3cc8375b0c46c32719768ebb561b0d4" 
HandleID="k8s-pod-network.d7cd9fd77f88b6b66516b3e91558b918d3cc8375b0c46c32719768ebb561b0d4" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-goldmane--666569f655--xt7wl-eth0" Nov 1 00:44:11.769010 env[1301]: 2025-11-01 00:44:11.663 [INFO][3788] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d7cd9fd77f88b6b66516b3e91558b918d3cc8375b0c46c32719768ebb561b0d4" HandleID="k8s-pod-network.d7cd9fd77f88b6b66516b3e91558b918d3cc8375b0c46c32719768ebb561b0d4" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-goldmane--666569f655--xt7wl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", "pod":"goldmane-666569f655-xt7wl", "timestamp":"2025-11-01 00:44:11.663603929 +0000 UTC"}, Hostname:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:44:11.769010 env[1301]: 2025-11-01 00:44:11.663 [INFO][3788] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:11.769010 env[1301]: 2025-11-01 00:44:11.664 [INFO][3788] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:44:11.769010 env[1301]: 2025-11-01 00:44:11.664 [INFO][3788] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762' Nov 1 00:44:11.769010 env[1301]: 2025-11-01 00:44:11.675 [INFO][3788] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d7cd9fd77f88b6b66516b3e91558b918d3cc8375b0c46c32719768ebb561b0d4" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:11.769010 env[1301]: 2025-11-01 00:44:11.683 [INFO][3788] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:11.769010 env[1301]: 2025-11-01 00:44:11.689 [INFO][3788] ipam/ipam.go 511: Trying affinity for 192.168.97.64/26 host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:11.769010 env[1301]: 2025-11-01 00:44:11.691 [INFO][3788] ipam/ipam.go 158: Attempting to load block cidr=192.168.97.64/26 host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:11.769010 env[1301]: 2025-11-01 00:44:11.695 [INFO][3788] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.97.64/26 host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:11.769010 env[1301]: 2025-11-01 00:44:11.695 [INFO][3788] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.97.64/26 handle="k8s-pod-network.d7cd9fd77f88b6b66516b3e91558b918d3cc8375b0c46c32719768ebb561b0d4" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:11.769010 env[1301]: 2025-11-01 00:44:11.697 [INFO][3788] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d7cd9fd77f88b6b66516b3e91558b918d3cc8375b0c46c32719768ebb561b0d4 Nov 1 00:44:11.769010 env[1301]: 2025-11-01 00:44:11.702 [INFO][3788] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.97.64/26 handle="k8s-pod-network.d7cd9fd77f88b6b66516b3e91558b918d3cc8375b0c46c32719768ebb561b0d4" 
host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:11.769010 env[1301]: 2025-11-01 00:44:11.712 [INFO][3788] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.97.66/26] block=192.168.97.64/26 handle="k8s-pod-network.d7cd9fd77f88b6b66516b3e91558b918d3cc8375b0c46c32719768ebb561b0d4" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:11.769010 env[1301]: 2025-11-01 00:44:11.712 [INFO][3788] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.97.66/26] handle="k8s-pod-network.d7cd9fd77f88b6b66516b3e91558b918d3cc8375b0c46c32719768ebb561b0d4" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:11.769010 env[1301]: 2025-11-01 00:44:11.713 [INFO][3788] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:11.769010 env[1301]: 2025-11-01 00:44:11.713 [INFO][3788] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.97.66/26] IPv6=[] ContainerID="d7cd9fd77f88b6b66516b3e91558b918d3cc8375b0c46c32719768ebb561b0d4" HandleID="k8s-pod-network.d7cd9fd77f88b6b66516b3e91558b918d3cc8375b0c46c32719768ebb561b0d4" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-goldmane--666569f655--xt7wl-eth0" Nov 1 00:44:11.770356 env[1301]: 2025-11-01 00:44:11.716 [INFO][3775] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d7cd9fd77f88b6b66516b3e91558b918d3cc8375b0c46c32719768ebb561b0d4" Namespace="calico-system" Pod="goldmane-666569f655-xt7wl" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-goldmane--666569f655--xt7wl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-goldmane--666569f655--xt7wl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"bb5676df-eb26-4a3d-9a39-dc277ac29b28", ResourceVersion:"973", Generation:0, 
CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", ContainerID:"", Pod:"goldmane-666569f655-xt7wl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.97.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia9aba9247e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:11.770356 env[1301]: 2025-11-01 00:44:11.716 [INFO][3775] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.66/32] ContainerID="d7cd9fd77f88b6b66516b3e91558b918d3cc8375b0c46c32719768ebb561b0d4" Namespace="calico-system" Pod="goldmane-666569f655-xt7wl" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-goldmane--666569f655--xt7wl-eth0" Nov 1 00:44:11.770356 env[1301]: 2025-11-01 00:44:11.716 [INFO][3775] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia9aba9247e4 ContainerID="d7cd9fd77f88b6b66516b3e91558b918d3cc8375b0c46c32719768ebb561b0d4" Namespace="calico-system" Pod="goldmane-666569f655-xt7wl" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-goldmane--666569f655--xt7wl-eth0" Nov 1 00:44:11.770356 env[1301]: 2025-11-01 00:44:11.748 [INFO][3775] cni-plugin/dataplane_linux.go 508: 
Disabling IPv4 forwarding ContainerID="d7cd9fd77f88b6b66516b3e91558b918d3cc8375b0c46c32719768ebb561b0d4" Namespace="calico-system" Pod="goldmane-666569f655-xt7wl" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-goldmane--666569f655--xt7wl-eth0" Nov 1 00:44:11.770356 env[1301]: 2025-11-01 00:44:11.749 [INFO][3775] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d7cd9fd77f88b6b66516b3e91558b918d3cc8375b0c46c32719768ebb561b0d4" Namespace="calico-system" Pod="goldmane-666569f655-xt7wl" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-goldmane--666569f655--xt7wl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-goldmane--666569f655--xt7wl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"bb5676df-eb26-4a3d-9a39-dc277ac29b28", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", ContainerID:"d7cd9fd77f88b6b66516b3e91558b918d3cc8375b0c46c32719768ebb561b0d4", Pod:"goldmane-666569f655-xt7wl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.97.66/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia9aba9247e4", MAC:"1e:56:93:81:28:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:11.770356 env[1301]: 2025-11-01 00:44:11.765 [INFO][3775] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d7cd9fd77f88b6b66516b3e91558b918d3cc8375b0c46c32719768ebb561b0d4" Namespace="calico-system" Pod="goldmane-666569f655-xt7wl" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-goldmane--666569f655--xt7wl-eth0" Nov 1 00:44:11.794052 env[1301]: time="2025-11-01T00:44:11.793617673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:44:11.794052 env[1301]: time="2025-11-01T00:44:11.793706463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:44:11.794052 env[1301]: time="2025-11-01T00:44:11.793733635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:44:11.794646 env[1301]: time="2025-11-01T00:44:11.794552491Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d7cd9fd77f88b6b66516b3e91558b918d3cc8375b0c46c32719768ebb561b0d4 pid=3811 runtime=io.containerd.runc.v2 Nov 1 00:44:11.852595 kernel: kauditd_printk_skb: 565 callbacks suppressed Nov 1 00:44:11.852788 kernel: audit: type=1325 audit(1761957851.828:398): table=filter:111 family=2 entries=44 op=nft_register_chain pid=3824 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:44:11.828000 audit[3824]: NETFILTER_CFG table=filter:111 family=2 entries=44 op=nft_register_chain pid=3824 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:44:11.828000 audit[3824]: SYSCALL arch=c000003e syscall=46 success=yes exit=25180 a0=3 a1=7ffe9dcfdb50 a2=0 a3=7ffe9dcfdb3c items=0 ppid=3639 pid=3824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:11.887201 kernel: audit: type=1300 audit(1761957851.828:398): arch=c000003e syscall=46 success=yes exit=25180 a0=3 a1=7ffe9dcfdb50 a2=0 a3=7ffe9dcfdb3c items=0 ppid=3639 pid=3824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:11.828000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:44:11.908326 kernel: audit: type=1327 audit(1761957851.828:398): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:44:11.954393 
env[1301]: time="2025-11-01T00:44:11.954326659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-xt7wl,Uid:bb5676df-eb26-4a3d-9a39-dc277ac29b28,Namespace:calico-system,Attempt:1,} returns sandbox id \"d7cd9fd77f88b6b66516b3e91558b918d3cc8375b0c46c32719768ebb561b0d4\"" Nov 1 00:44:11.958058 env[1301]: time="2025-11-01T00:44:11.958001217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:44:12.154480 env[1301]: time="2025-11-01T00:44:12.154377995Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:12.156136 env[1301]: time="2025-11-01T00:44:12.156042961Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:44:12.157122 kubelet[2193]: E1101 00:44:12.156678 2193 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:44:12.157122 kubelet[2193]: E1101 00:44:12.156759 2193 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:44:12.157122 kubelet[2193]: E1101 00:44:12.157015 2193 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qm7px,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-xt7wl_calico-system(bb5676df-eb26-4a3d-9a39-dc277ac29b28): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:12.158347 kubelet[2193]: E1101 00:44:12.158284 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xt7wl" podUID="bb5676df-eb26-4a3d-9a39-dc277ac29b28" Nov 1 00:44:12.390882 env[1301]: time="2025-11-01T00:44:12.390809075Z" level=info msg="StopPodSandbox for \"8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c\"" Nov 1 00:44:12.391342 env[1301]: time="2025-11-01T00:44:12.390806676Z" level=info msg="StopPodSandbox 
for \"741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112\"" Nov 1 00:44:12.543882 systemd[1]: run-containerd-runc-k8s.io-d7cd9fd77f88b6b66516b3e91558b918d3cc8375b0c46c32719768ebb561b0d4-runc.Lqdh7X.mount: Deactivated successfully. Nov 1 00:44:12.591381 env[1301]: 2025-11-01 00:44:12.499 [INFO][3866] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" Nov 1 00:44:12.591381 env[1301]: 2025-11-01 00:44:12.499 [INFO][3866] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" iface="eth0" netns="/var/run/netns/cni-ea634ae6-ef28-e140-3b47-29570dcdee35" Nov 1 00:44:12.591381 env[1301]: 2025-11-01 00:44:12.501 [INFO][3866] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" iface="eth0" netns="/var/run/netns/cni-ea634ae6-ef28-e140-3b47-29570dcdee35" Nov 1 00:44:12.591381 env[1301]: 2025-11-01 00:44:12.502 [INFO][3866] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" iface="eth0" netns="/var/run/netns/cni-ea634ae6-ef28-e140-3b47-29570dcdee35" Nov 1 00:44:12.591381 env[1301]: 2025-11-01 00:44:12.502 [INFO][3866] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" Nov 1 00:44:12.591381 env[1301]: 2025-11-01 00:44:12.502 [INFO][3866] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" Nov 1 00:44:12.591381 env[1301]: 2025-11-01 00:44:12.571 [INFO][3879] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" HandleID="k8s-pod-network.8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-csi--node--driver--fvm9x-eth0" Nov 1 00:44:12.591381 env[1301]: 2025-11-01 00:44:12.571 [INFO][3879] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:12.591381 env[1301]: 2025-11-01 00:44:12.571 [INFO][3879] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:12.591381 env[1301]: 2025-11-01 00:44:12.583 [WARNING][3879] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" HandleID="k8s-pod-network.8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-csi--node--driver--fvm9x-eth0" Nov 1 00:44:12.591381 env[1301]: 2025-11-01 00:44:12.583 [INFO][3879] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" HandleID="k8s-pod-network.8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-csi--node--driver--fvm9x-eth0" Nov 1 00:44:12.591381 env[1301]: 2025-11-01 00:44:12.586 [INFO][3879] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:12.591381 env[1301]: 2025-11-01 00:44:12.589 [INFO][3866] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" Nov 1 00:44:12.598495 systemd[1]: run-netns-cni\x2dea634ae6\x2def28\x2de140\x2d3b47\x2d29570dcdee35.mount: Deactivated successfully. 
Nov 1 00:44:12.600950 env[1301]: time="2025-11-01T00:44:12.600879059Z" level=info msg="TearDown network for sandbox \"8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c\" successfully" Nov 1 00:44:12.601264 env[1301]: time="2025-11-01T00:44:12.601152876Z" level=info msg="StopPodSandbox for \"8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c\" returns successfully" Nov 1 00:44:12.602579 env[1301]: time="2025-11-01T00:44:12.602536836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fvm9x,Uid:1db94968-800e-4bd7-88c1-2551a090e4ab,Namespace:calico-system,Attempt:1,}" Nov 1 00:44:12.658236 env[1301]: 2025-11-01 00:44:12.514 [INFO][3867] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" Nov 1 00:44:12.658236 env[1301]: 2025-11-01 00:44:12.519 [INFO][3867] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" iface="eth0" netns="/var/run/netns/cni-1ab4fa61-5730-dfe4-2bb6-ac3e4aade005" Nov 1 00:44:12.658236 env[1301]: 2025-11-01 00:44:12.519 [INFO][3867] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" iface="eth0" netns="/var/run/netns/cni-1ab4fa61-5730-dfe4-2bb6-ac3e4aade005" Nov 1 00:44:12.658236 env[1301]: 2025-11-01 00:44:12.520 [INFO][3867] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" iface="eth0" netns="/var/run/netns/cni-1ab4fa61-5730-dfe4-2bb6-ac3e4aade005" Nov 1 00:44:12.658236 env[1301]: 2025-11-01 00:44:12.520 [INFO][3867] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" Nov 1 00:44:12.658236 env[1301]: 2025-11-01 00:44:12.520 [INFO][3867] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" Nov 1 00:44:12.658236 env[1301]: 2025-11-01 00:44:12.613 [INFO][3885] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" HandleID="k8s-pod-network.741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--lmk4w-eth0" Nov 1 00:44:12.658236 env[1301]: 2025-11-01 00:44:12.614 [INFO][3885] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:12.658236 env[1301]: 2025-11-01 00:44:12.614 [INFO][3885] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:12.658236 env[1301]: 2025-11-01 00:44:12.647 [WARNING][3885] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" HandleID="k8s-pod-network.741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--lmk4w-eth0" Nov 1 00:44:12.658236 env[1301]: 2025-11-01 00:44:12.648 [INFO][3885] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" HandleID="k8s-pod-network.741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--lmk4w-eth0" Nov 1 00:44:12.658236 env[1301]: 2025-11-01 00:44:12.651 [INFO][3885] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:12.658236 env[1301]: 2025-11-01 00:44:12.653 [INFO][3867] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" Nov 1 00:44:12.665415 systemd[1]: run-netns-cni\x2d1ab4fa61\x2d5730\x2ddfe4\x2d2bb6\x2dac3e4aade005.mount: Deactivated successfully. 
Nov 1 00:44:12.669364 env[1301]: time="2025-11-01T00:44:12.669292837Z" level=info msg="TearDown network for sandbox \"741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112\" successfully" Nov 1 00:44:12.669710 env[1301]: time="2025-11-01T00:44:12.669659122Z" level=info msg="StopPodSandbox for \"741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112\" returns successfully" Nov 1 00:44:12.671564 env[1301]: time="2025-11-01T00:44:12.671516952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699db95d94-lmk4w,Uid:74931096-7bc0-4134-a8b4-61ec9bf5e338,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:44:12.835091 kubelet[2193]: E1101 00:44:12.832927 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xt7wl" podUID="bb5676df-eb26-4a3d-9a39-dc277ac29b28" Nov 1 00:44:12.908000 audit[3932]: NETFILTER_CFG table=filter:112 family=2 entries=20 op=nft_register_rule pid=3932 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:12.927471 kernel: audit: type=1325 audit(1761957852.908:399): table=filter:112 family=2 entries=20 op=nft_register_rule pid=3932 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:12.974134 kernel: audit: type=1300 audit(1761957852.908:399): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff07f9c380 a2=0 a3=7fff07f9c36c items=0 ppid=2355 pid=3932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:12.974403 kernel: audit: type=1327 audit(1761957852.908:399): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:12.908000 audit[3932]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff07f9c380 a2=0 a3=7fff07f9c36c items=0 ppid=2355 pid=3932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:12.908000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:12.976768 systemd-networkd[1062]: calia9aba9247e4: Gained IPv6LL Nov 1 00:44:12.930000 audit[3932]: NETFILTER_CFG table=nat:113 family=2 entries=14 op=nft_register_rule pid=3932 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:13.008538 kernel: audit: type=1325 audit(1761957852.930:400): table=nat:113 family=2 entries=14 op=nft_register_rule pid=3932 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:13.052861 kernel: audit: type=1300 audit(1761957852.930:400): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff07f9c380 a2=0 a3=0 items=0 ppid=2355 pid=3932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:12.930000 audit[3932]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff07f9c380 a2=0 a3=0 items=0 ppid=2355 pid=3932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:13.057329 systemd-networkd[1062]: calie5efbfc4fa4: Link UP Nov 1 
00:44:13.071688 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 00:44:13.071917 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie5efbfc4fa4: link becomes ready Nov 1 00:44:13.085012 systemd-networkd[1062]: calie5efbfc4fa4: Gained carrier Nov 1 00:44:13.104098 kernel: audit: type=1327 audit(1761957852.930:400): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:12.930000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:13.136396 env[1301]: 2025-11-01 00:44:12.736 [INFO][3892] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-csi--node--driver--fvm9x-eth0 csi-node-driver- calico-system 1db94968-800e-4bd7-88c1-2551a090e4ab 984 0 2025-11-01 00:43:45 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762 csi-node-driver-fvm9x eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie5efbfc4fa4 [] [] }} ContainerID="955e8da4c063714cb24a75c1cdce638732b7e46804aabae37b1649b31157667b" Namespace="calico-system" Pod="csi-node-driver-fvm9x" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-csi--node--driver--fvm9x-" Nov 1 00:44:13.136396 env[1301]: 2025-11-01 00:44:12.737 [INFO][3892] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="955e8da4c063714cb24a75c1cdce638732b7e46804aabae37b1649b31157667b" Namespace="calico-system" Pod="csi-node-driver-fvm9x" 
WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-csi--node--driver--fvm9x-eth0" Nov 1 00:44:13.136396 env[1301]: 2025-11-01 00:44:12.854 [INFO][3916] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="955e8da4c063714cb24a75c1cdce638732b7e46804aabae37b1649b31157667b" HandleID="k8s-pod-network.955e8da4c063714cb24a75c1cdce638732b7e46804aabae37b1649b31157667b" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-csi--node--driver--fvm9x-eth0" Nov 1 00:44:13.136396 env[1301]: 2025-11-01 00:44:12.858 [INFO][3916] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="955e8da4c063714cb24a75c1cdce638732b7e46804aabae37b1649b31157667b" HandleID="k8s-pod-network.955e8da4c063714cb24a75c1cdce638732b7e46804aabae37b1649b31157667b" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-csi--node--driver--fvm9x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd950), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", "pod":"csi-node-driver-fvm9x", "timestamp":"2025-11-01 00:44:12.854823169 +0000 UTC"}, Hostname:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:44:13.136396 env[1301]: 2025-11-01 00:44:12.865 [INFO][3916] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:13.136396 env[1301]: 2025-11-01 00:44:12.865 [INFO][3916] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:44:13.136396 env[1301]: 2025-11-01 00:44:12.865 [INFO][3916] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762' Nov 1 00:44:13.136396 env[1301]: 2025-11-01 00:44:12.887 [INFO][3916] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.955e8da4c063714cb24a75c1cdce638732b7e46804aabae37b1649b31157667b" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:13.136396 env[1301]: 2025-11-01 00:44:12.896 [INFO][3916] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:13.136396 env[1301]: 2025-11-01 00:44:12.908 [INFO][3916] ipam/ipam.go 511: Trying affinity for 192.168.97.64/26 host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:13.136396 env[1301]: 2025-11-01 00:44:12.928 [INFO][3916] ipam/ipam.go 158: Attempting to load block cidr=192.168.97.64/26 host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:13.136396 env[1301]: 2025-11-01 00:44:12.935 [INFO][3916] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.97.64/26 host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:13.136396 env[1301]: 2025-11-01 00:44:12.935 [INFO][3916] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.97.64/26 handle="k8s-pod-network.955e8da4c063714cb24a75c1cdce638732b7e46804aabae37b1649b31157667b" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:13.136396 env[1301]: 2025-11-01 00:44:12.937 [INFO][3916] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.955e8da4c063714cb24a75c1cdce638732b7e46804aabae37b1649b31157667b Nov 1 00:44:13.136396 env[1301]: 2025-11-01 00:44:12.943 [INFO][3916] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.97.64/26 handle="k8s-pod-network.955e8da4c063714cb24a75c1cdce638732b7e46804aabae37b1649b31157667b" 
host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:13.136396 env[1301]: 2025-11-01 00:44:12.984 [INFO][3916] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.97.67/26] block=192.168.97.64/26 handle="k8s-pod-network.955e8da4c063714cb24a75c1cdce638732b7e46804aabae37b1649b31157667b" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:13.136396 env[1301]: 2025-11-01 00:44:12.985 [INFO][3916] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.97.67/26] handle="k8s-pod-network.955e8da4c063714cb24a75c1cdce638732b7e46804aabae37b1649b31157667b" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:13.136396 env[1301]: 2025-11-01 00:44:12.985 [INFO][3916] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:13.136396 env[1301]: 2025-11-01 00:44:12.985 [INFO][3916] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.97.67/26] IPv6=[] ContainerID="955e8da4c063714cb24a75c1cdce638732b7e46804aabae37b1649b31157667b" HandleID="k8s-pod-network.955e8da4c063714cb24a75c1cdce638732b7e46804aabae37b1649b31157667b" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-csi--node--driver--fvm9x-eth0" Nov 1 00:44:13.137756 env[1301]: 2025-11-01 00:44:13.028 [INFO][3892] cni-plugin/k8s.go 418: Populated endpoint ContainerID="955e8da4c063714cb24a75c1cdce638732b7e46804aabae37b1649b31157667b" Namespace="calico-system" Pod="csi-node-driver-fvm9x" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-csi--node--driver--fvm9x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-csi--node--driver--fvm9x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1db94968-800e-4bd7-88c1-2551a090e4ab", ResourceVersion:"984", Generation:0, 
CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", ContainerID:"", Pod:"csi-node-driver-fvm9x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.97.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie5efbfc4fa4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:13.137756 env[1301]: 2025-11-01 00:44:13.028 [INFO][3892] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.67/32] ContainerID="955e8da4c063714cb24a75c1cdce638732b7e46804aabae37b1649b31157667b" Namespace="calico-system" Pod="csi-node-driver-fvm9x" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-csi--node--driver--fvm9x-eth0" Nov 1 00:44:13.137756 env[1301]: 2025-11-01 00:44:13.029 [INFO][3892] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie5efbfc4fa4 ContainerID="955e8da4c063714cb24a75c1cdce638732b7e46804aabae37b1649b31157667b" Namespace="calico-system" Pod="csi-node-driver-fvm9x" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-csi--node--driver--fvm9x-eth0" Nov 1 00:44:13.137756 
env[1301]: 2025-11-01 00:44:13.079 [INFO][3892] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="955e8da4c063714cb24a75c1cdce638732b7e46804aabae37b1649b31157667b" Namespace="calico-system" Pod="csi-node-driver-fvm9x" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-csi--node--driver--fvm9x-eth0" Nov 1 00:44:13.137756 env[1301]: 2025-11-01 00:44:13.097 [INFO][3892] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="955e8da4c063714cb24a75c1cdce638732b7e46804aabae37b1649b31157667b" Namespace="calico-system" Pod="csi-node-driver-fvm9x" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-csi--node--driver--fvm9x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-csi--node--driver--fvm9x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1db94968-800e-4bd7-88c1-2551a090e4ab", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", ContainerID:"955e8da4c063714cb24a75c1cdce638732b7e46804aabae37b1649b31157667b", 
Pod:"csi-node-driver-fvm9x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.97.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie5efbfc4fa4", MAC:"62:e1:90:06:6d:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:13.137756 env[1301]: 2025-11-01 00:44:13.127 [INFO][3892] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="955e8da4c063714cb24a75c1cdce638732b7e46804aabae37b1649b31157667b" Namespace="calico-system" Pod="csi-node-driver-fvm9x" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-csi--node--driver--fvm9x-eth0" Nov 1 00:44:13.198364 env[1301]: time="2025-11-01T00:44:13.198266951Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:44:13.198677 env[1301]: time="2025-11-01T00:44:13.198636688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:44:13.198848 env[1301]: time="2025-11-01T00:44:13.198814476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:44:13.199241 env[1301]: time="2025-11-01T00:44:13.199165008Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/955e8da4c063714cb24a75c1cdce638732b7e46804aabae37b1649b31157667b pid=3951 runtime=io.containerd.runc.v2 Nov 1 00:44:13.181000 audit[3942]: NETFILTER_CFG table=filter:114 family=2 entries=40 op=nft_register_chain pid=3942 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:44:13.220217 kernel: audit: type=1325 audit(1761957853.181:401): table=filter:114 family=2 entries=40 op=nft_register_chain pid=3942 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:44:13.181000 audit[3942]: SYSCALL arch=c000003e syscall=46 success=yes exit=20764 a0=3 a1=7ffefab98aa0 a2=0 a3=7ffefab98a8c items=0 ppid=3639 pid=3942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:13.181000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:44:13.233353 systemd-networkd[1062]: cali6a4beef0d70: Link UP Nov 1 00:44:13.242272 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali6a4beef0d70: link becomes ready Nov 1 00:44:13.245996 systemd-networkd[1062]: cali6a4beef0d70: Gained carrier Nov 1 00:44:13.267679 env[1301]: 2025-11-01 00:44:12.813 [INFO][3905] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--lmk4w-eth0 calico-apiserver-699db95d94- calico-apiserver 74931096-7bc0-4134-a8b4-61ec9bf5e338 985 0 2025-11-01 00:43:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver 
k8s-app:calico-apiserver pod-template-hash:699db95d94 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762 calico-apiserver-699db95d94-lmk4w eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6a4beef0d70 [] [] }} ContainerID="afa8aa2882239be15d2f0895965c7ed0d798419ab4909f98a0e655baa71f3ae7" Namespace="calico-apiserver" Pod="calico-apiserver-699db95d94-lmk4w" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--lmk4w-" Nov 1 00:44:13.267679 env[1301]: 2025-11-01 00:44:12.814 [INFO][3905] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="afa8aa2882239be15d2f0895965c7ed0d798419ab4909f98a0e655baa71f3ae7" Namespace="calico-apiserver" Pod="calico-apiserver-699db95d94-lmk4w" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--lmk4w-eth0" Nov 1 00:44:13.267679 env[1301]: 2025-11-01 00:44:13.077 [INFO][3926] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="afa8aa2882239be15d2f0895965c7ed0d798419ab4909f98a0e655baa71f3ae7" HandleID="k8s-pod-network.afa8aa2882239be15d2f0895965c7ed0d798419ab4909f98a0e655baa71f3ae7" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--lmk4w-eth0" Nov 1 00:44:13.267679 env[1301]: 2025-11-01 00:44:13.107 [INFO][3926] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="afa8aa2882239be15d2f0895965c7ed0d798419ab4909f98a0e655baa71f3ae7" HandleID="k8s-pod-network.afa8aa2882239be15d2f0895965c7ed0d798419ab4909f98a0e655baa71f3ae7" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--lmk4w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd530), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", "pod":"calico-apiserver-699db95d94-lmk4w", "timestamp":"2025-11-01 00:44:13.07739379 +0000 UTC"}, Hostname:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:44:13.267679 env[1301]: 2025-11-01 00:44:13.107 [INFO][3926] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:13.267679 env[1301]: 2025-11-01 00:44:13.108 [INFO][3926] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:13.267679 env[1301]: 2025-11-01 00:44:13.108 [INFO][3926] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762' Nov 1 00:44:13.267679 env[1301]: 2025-11-01 00:44:13.142 [INFO][3926] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.afa8aa2882239be15d2f0895965c7ed0d798419ab4909f98a0e655baa71f3ae7" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:13.267679 env[1301]: 2025-11-01 00:44:13.160 [INFO][3926] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:13.267679 env[1301]: 2025-11-01 00:44:13.169 [INFO][3926] ipam/ipam.go 511: Trying affinity for 192.168.97.64/26 host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:13.267679 env[1301]: 2025-11-01 00:44:13.172 [INFO][3926] ipam/ipam.go 158: Attempting to load block cidr=192.168.97.64/26 host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:13.267679 env[1301]: 2025-11-01 00:44:13.176 [INFO][3926] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.97.64/26 
host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:13.267679 env[1301]: 2025-11-01 00:44:13.176 [INFO][3926] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.97.64/26 handle="k8s-pod-network.afa8aa2882239be15d2f0895965c7ed0d798419ab4909f98a0e655baa71f3ae7" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:13.267679 env[1301]: 2025-11-01 00:44:13.179 [INFO][3926] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.afa8aa2882239be15d2f0895965c7ed0d798419ab4909f98a0e655baa71f3ae7 Nov 1 00:44:13.267679 env[1301]: 2025-11-01 00:44:13.189 [INFO][3926] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.97.64/26 handle="k8s-pod-network.afa8aa2882239be15d2f0895965c7ed0d798419ab4909f98a0e655baa71f3ae7" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:13.267679 env[1301]: 2025-11-01 00:44:13.201 [INFO][3926] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.97.68/26] block=192.168.97.64/26 handle="k8s-pod-network.afa8aa2882239be15d2f0895965c7ed0d798419ab4909f98a0e655baa71f3ae7" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:13.267679 env[1301]: 2025-11-01 00:44:13.202 [INFO][3926] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.97.68/26] handle="k8s-pod-network.afa8aa2882239be15d2f0895965c7ed0d798419ab4909f98a0e655baa71f3ae7" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:13.267679 env[1301]: 2025-11-01 00:44:13.202 [INFO][3926] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:44:13.267679 env[1301]: 2025-11-01 00:44:13.202 [INFO][3926] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.97.68/26] IPv6=[] ContainerID="afa8aa2882239be15d2f0895965c7ed0d798419ab4909f98a0e655baa71f3ae7" HandleID="k8s-pod-network.afa8aa2882239be15d2f0895965c7ed0d798419ab4909f98a0e655baa71f3ae7" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--lmk4w-eth0" Nov 1 00:44:13.272713 env[1301]: 2025-11-01 00:44:13.223 [INFO][3905] cni-plugin/k8s.go 418: Populated endpoint ContainerID="afa8aa2882239be15d2f0895965c7ed0d798419ab4909f98a0e655baa71f3ae7" Namespace="calico-apiserver" Pod="calico-apiserver-699db95d94-lmk4w" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--lmk4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--lmk4w-eth0", GenerateName:"calico-apiserver-699db95d94-", Namespace:"calico-apiserver", SelfLink:"", UID:"74931096-7bc0-4134-a8b4-61ec9bf5e338", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"699db95d94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", ContainerID:"", 
Pod:"calico-apiserver-699db95d94-lmk4w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6a4beef0d70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:13.272713 env[1301]: 2025-11-01 00:44:13.223 [INFO][3905] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.68/32] ContainerID="afa8aa2882239be15d2f0895965c7ed0d798419ab4909f98a0e655baa71f3ae7" Namespace="calico-apiserver" Pod="calico-apiserver-699db95d94-lmk4w" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--lmk4w-eth0" Nov 1 00:44:13.272713 env[1301]: 2025-11-01 00:44:13.223 [INFO][3905] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6a4beef0d70 ContainerID="afa8aa2882239be15d2f0895965c7ed0d798419ab4909f98a0e655baa71f3ae7" Namespace="calico-apiserver" Pod="calico-apiserver-699db95d94-lmk4w" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--lmk4w-eth0" Nov 1 00:44:13.272713 env[1301]: 2025-11-01 00:44:13.233 [INFO][3905] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="afa8aa2882239be15d2f0895965c7ed0d798419ab4909f98a0e655baa71f3ae7" Namespace="calico-apiserver" Pod="calico-apiserver-699db95d94-lmk4w" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--lmk4w-eth0" Nov 1 00:44:13.272713 env[1301]: 2025-11-01 00:44:13.247 [INFO][3905] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="afa8aa2882239be15d2f0895965c7ed0d798419ab4909f98a0e655baa71f3ae7" Namespace="calico-apiserver" Pod="calico-apiserver-699db95d94-lmk4w" 
WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--lmk4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--lmk4w-eth0", GenerateName:"calico-apiserver-699db95d94-", Namespace:"calico-apiserver", SelfLink:"", UID:"74931096-7bc0-4134-a8b4-61ec9bf5e338", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"699db95d94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", ContainerID:"afa8aa2882239be15d2f0895965c7ed0d798419ab4909f98a0e655baa71f3ae7", Pod:"calico-apiserver-699db95d94-lmk4w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6a4beef0d70", MAC:"c6:99:30:b5:07:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:13.272713 env[1301]: 2025-11-01 00:44:13.264 [INFO][3905] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="afa8aa2882239be15d2f0895965c7ed0d798419ab4909f98a0e655baa71f3ae7" Namespace="calico-apiserver" Pod="calico-apiserver-699db95d94-lmk4w" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--lmk4w-eth0" Nov 1 00:44:13.306043 env[1301]: time="2025-11-01T00:44:13.305924357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:44:13.306330 env[1301]: time="2025-11-01T00:44:13.306001168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:44:13.306330 env[1301]: time="2025-11-01T00:44:13.306021039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:44:13.306681 env[1301]: time="2025-11-01T00:44:13.306619977Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/afa8aa2882239be15d2f0895965c7ed0d798419ab4909f98a0e655baa71f3ae7 pid=3988 runtime=io.containerd.runc.v2 Nov 1 00:44:13.345000 audit[4010]: NETFILTER_CFG table=filter:115 family=2 entries=58 op=nft_register_chain pid=4010 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:44:13.345000 audit[4010]: SYSCALL arch=c000003e syscall=46 success=yes exit=30584 a0=3 a1=7ffd07b12300 a2=0 a3=7ffd07b122ec items=0 ppid=3639 pid=4010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:13.345000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:44:13.408190 env[1301]: time="2025-11-01T00:44:13.403111101Z" level=info msg="StopPodSandbox 
for \"22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a\"" Nov 1 00:44:13.409784 env[1301]: time="2025-11-01T00:44:13.405756482Z" level=info msg="StopPodSandbox for \"6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773\"" Nov 1 00:44:13.410404 env[1301]: time="2025-11-01T00:44:13.405805266Z" level=info msg="StopPodSandbox for \"7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052\"" Nov 1 00:44:13.410960 env[1301]: time="2025-11-01T00:44:13.405855733Z" level=info msg="StopPodSandbox for \"e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3\"" Nov 1 00:44:13.428197 env[1301]: time="2025-11-01T00:44:13.428114166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fvm9x,Uid:1db94968-800e-4bd7-88c1-2551a090e4ab,Namespace:calico-system,Attempt:1,} returns sandbox id \"955e8da4c063714cb24a75c1cdce638732b7e46804aabae37b1649b31157667b\"" Nov 1 00:44:13.436940 env[1301]: time="2025-11-01T00:44:13.436847855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:44:13.467845 env[1301]: time="2025-11-01T00:44:13.467767113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699db95d94-lmk4w,Uid:74931096-7bc0-4134-a8b4-61ec9bf5e338,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"afa8aa2882239be15d2f0895965c7ed0d798419ab4909f98a0e655baa71f3ae7\"" Nov 1 00:44:13.637745 env[1301]: time="2025-11-01T00:44:13.636935894Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:13.640119 env[1301]: time="2025-11-01T00:44:13.640020946Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:44:13.641215 kubelet[2193]: E1101 00:44:13.640705 2193 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:44:13.641215 kubelet[2193]: E1101 00:44:13.640800 2193 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:44:13.642587 kubelet[2193]: E1101 00:44:13.641934 2193 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2rjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,
TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fvm9x_calico-system(1db94968-800e-4bd7-88c1-2551a090e4ab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:13.644026 env[1301]: time="2025-11-01T00:44:13.643980886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:44:13.736713 env[1301]: 2025-11-01 00:44:13.591 [INFO][4071] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" Nov 1 00:44:13.736713 env[1301]: 2025-11-01 00:44:13.592 [INFO][4071] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" iface="eth0" netns="/var/run/netns/cni-81c0b4d2-41d6-6bcf-69f8-bc7d411730fc" Nov 1 00:44:13.736713 env[1301]: 2025-11-01 00:44:13.592 [INFO][4071] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" iface="eth0" netns="/var/run/netns/cni-81c0b4d2-41d6-6bcf-69f8-bc7d411730fc" Nov 1 00:44:13.736713 env[1301]: 2025-11-01 00:44:13.592 [INFO][4071] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" iface="eth0" netns="/var/run/netns/cni-81c0b4d2-41d6-6bcf-69f8-bc7d411730fc" Nov 1 00:44:13.736713 env[1301]: 2025-11-01 00:44:13.593 [INFO][4071] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" Nov 1 00:44:13.736713 env[1301]: 2025-11-01 00:44:13.593 [INFO][4071] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" Nov 1 00:44:13.736713 env[1301]: 2025-11-01 00:44:13.707 [INFO][4097] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" HandleID="k8s-pod-network.6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--4bhnp-eth0" Nov 1 00:44:13.736713 env[1301]: 2025-11-01 00:44:13.709 [INFO][4097] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:13.736713 env[1301]: 2025-11-01 00:44:13.709 [INFO][4097] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:13.736713 env[1301]: 2025-11-01 00:44:13.728 [WARNING][4097] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" HandleID="k8s-pod-network.6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--4bhnp-eth0" Nov 1 00:44:13.736713 env[1301]: 2025-11-01 00:44:13.728 [INFO][4097] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" HandleID="k8s-pod-network.6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--4bhnp-eth0" Nov 1 00:44:13.736713 env[1301]: 2025-11-01 00:44:13.731 [INFO][4097] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:13.736713 env[1301]: 2025-11-01 00:44:13.733 [INFO][4071] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" Nov 1 00:44:13.743018 systemd[1]: run-netns-cni\x2d81c0b4d2\x2d41d6\x2d6bcf\x2d69f8\x2dbc7d411730fc.mount: Deactivated successfully. 
Nov 1 00:44:13.745071 env[1301]: time="2025-11-01T00:44:13.745001296Z" level=info msg="TearDown network for sandbox \"6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773\" successfully" Nov 1 00:44:13.745413 env[1301]: time="2025-11-01T00:44:13.745376021Z" level=info msg="StopPodSandbox for \"6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773\" returns successfully" Nov 1 00:44:13.746630 env[1301]: time="2025-11-01T00:44:13.746589252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4bhnp,Uid:a4e6e4e0-2f67-459b-8ed4-30190e515a8d,Namespace:kube-system,Attempt:1,}" Nov 1 00:44:13.852475 kubelet[2193]: E1101 00:44:13.850799 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xt7wl" podUID="bb5676df-eb26-4a3d-9a39-dc277ac29b28" Nov 1 00:44:13.855472 env[1301]: time="2025-11-01T00:44:13.855389131Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:13.857926 env[1301]: time="2025-11-01T00:44:13.857808781Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:44:13.859305 kubelet[2193]: E1101 00:44:13.858527 2193 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:44:13.859305 kubelet[2193]: E1101 00:44:13.858623 2193 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:44:13.859305 kubelet[2193]: E1101 00:44:13.859141 2193 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lb57m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-699db95d94-lmk4w_calico-apiserver(74931096-7bc0-4134-a8b4-61ec9bf5e338): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:13.860800 kubelet[2193]: E1101 00:44:13.860563 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-699db95d94-lmk4w" podUID="74931096-7bc0-4134-a8b4-61ec9bf5e338" Nov 1 00:44:13.861603 env[1301]: time="2025-11-01T00:44:13.861559794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:44:13.896577 env[1301]: 2025-11-01 00:44:13.722 
[INFO][4083] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" Nov 1 00:44:13.896577 env[1301]: 2025-11-01 00:44:13.722 [INFO][4083] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" iface="eth0" netns="/var/run/netns/cni-98ec26fe-9d80-fd58-73fe-44fdf99d270a" Nov 1 00:44:13.896577 env[1301]: 2025-11-01 00:44:13.722 [INFO][4083] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" iface="eth0" netns="/var/run/netns/cni-98ec26fe-9d80-fd58-73fe-44fdf99d270a" Nov 1 00:44:13.896577 env[1301]: 2025-11-01 00:44:13.724 [INFO][4083] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" iface="eth0" netns="/var/run/netns/cni-98ec26fe-9d80-fd58-73fe-44fdf99d270a" Nov 1 00:44:13.896577 env[1301]: 2025-11-01 00:44:13.724 [INFO][4083] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" Nov 1 00:44:13.896577 env[1301]: 2025-11-01 00:44:13.724 [INFO][4083] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" Nov 1 00:44:13.896577 env[1301]: 2025-11-01 00:44:13.831 [INFO][4112] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" HandleID="k8s-pod-network.e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--kube--controllers--f99bc94f9--dqr4b-eth0" Nov 1 00:44:13.896577 env[1301]: 2025-11-01 00:44:13.833 [INFO][4112] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 00:44:13.896577 env[1301]: 2025-11-01 00:44:13.834 [INFO][4112] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:13.896577 env[1301]: 2025-11-01 00:44:13.848 [WARNING][4112] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" HandleID="k8s-pod-network.e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--kube--controllers--f99bc94f9--dqr4b-eth0" Nov 1 00:44:13.896577 env[1301]: 2025-11-01 00:44:13.848 [INFO][4112] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" HandleID="k8s-pod-network.e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--kube--controllers--f99bc94f9--dqr4b-eth0" Nov 1 00:44:13.896577 env[1301]: 2025-11-01 00:44:13.866 [INFO][4112] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:13.896577 env[1301]: 2025-11-01 00:44:13.893 [INFO][4083] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" Nov 1 00:44:13.904296 env[1301]: time="2025-11-01T00:44:13.904130028Z" level=info msg="TearDown network for sandbox \"e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3\" successfully" Nov 1 00:44:13.904608 env[1301]: time="2025-11-01T00:44:13.904555045Z" level=info msg="StopPodSandbox for \"e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3\" returns successfully" Nov 1 00:44:13.907272 systemd[1]: run-netns-cni\x2d98ec26fe\x2d9d80\x2dfd58\x2d73fe\x2d44fdf99d270a.mount: Deactivated successfully. 
Nov 1 00:44:13.911489 env[1301]: time="2025-11-01T00:44:13.911432632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f99bc94f9-dqr4b,Uid:34b32444-f031-47a6-89b0-97775432ade7,Namespace:calico-system,Attempt:1,}" Nov 1 00:44:13.925780 env[1301]: 2025-11-01 00:44:13.721 [INFO][4074] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" Nov 1 00:44:13.925780 env[1301]: 2025-11-01 00:44:13.721 [INFO][4074] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" iface="eth0" netns="/var/run/netns/cni-519349a9-1d73-1e22-8341-ae531ea06db7" Nov 1 00:44:13.925780 env[1301]: 2025-11-01 00:44:13.723 [INFO][4074] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" iface="eth0" netns="/var/run/netns/cni-519349a9-1d73-1e22-8341-ae531ea06db7" Nov 1 00:44:13.925780 env[1301]: 2025-11-01 00:44:13.723 [INFO][4074] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" iface="eth0" netns="/var/run/netns/cni-519349a9-1d73-1e22-8341-ae531ea06db7" Nov 1 00:44:13.925780 env[1301]: 2025-11-01 00:44:13.723 [INFO][4074] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" Nov 1 00:44:13.925780 env[1301]: 2025-11-01 00:44:13.723 [INFO][4074] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" Nov 1 00:44:13.925780 env[1301]: 2025-11-01 00:44:13.889 [INFO][4114] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" HandleID="k8s-pod-network.22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--7xkbs-eth0" Nov 1 00:44:13.925780 env[1301]: 2025-11-01 00:44:13.891 [INFO][4114] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:13.925780 env[1301]: 2025-11-01 00:44:13.891 [INFO][4114] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:13.925780 env[1301]: 2025-11-01 00:44:13.917 [WARNING][4114] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" HandleID="k8s-pod-network.22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--7xkbs-eth0" Nov 1 00:44:13.925780 env[1301]: 2025-11-01 00:44:13.917 [INFO][4114] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" HandleID="k8s-pod-network.22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--7xkbs-eth0" Nov 1 00:44:13.925780 env[1301]: 2025-11-01 00:44:13.922 [INFO][4114] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:13.925780 env[1301]: 2025-11-01 00:44:13.923 [INFO][4074] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" Nov 1 00:44:13.927844 env[1301]: time="2025-11-01T00:44:13.927784346Z" level=info msg="TearDown network for sandbox \"22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a\" successfully" Nov 1 00:44:13.928028 env[1301]: time="2025-11-01T00:44:13.927998751Z" level=info msg="StopPodSandbox for \"22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a\" returns successfully" Nov 1 00:44:13.929481 env[1301]: time="2025-11-01T00:44:13.929439180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7xkbs,Uid:b2c78f36-7235-48aa-baae-2bd9c8a78b81,Namespace:kube-system,Attempt:1,}" Nov 1 00:44:14.008227 env[1301]: 2025-11-01 00:44:13.723 [INFO][4088] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" Nov 1 00:44:14.008227 env[1301]: 2025-11-01 00:44:13.723 [INFO][4088] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" iface="eth0" netns="/var/run/netns/cni-5929d4c4-ed7e-ef42-7cb5-187f33961a3e" Nov 1 00:44:14.008227 env[1301]: 2025-11-01 00:44:13.724 [INFO][4088] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" iface="eth0" netns="/var/run/netns/cni-5929d4c4-ed7e-ef42-7cb5-187f33961a3e" Nov 1 00:44:14.008227 env[1301]: 2025-11-01 00:44:13.724 [INFO][4088] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" iface="eth0" netns="/var/run/netns/cni-5929d4c4-ed7e-ef42-7cb5-187f33961a3e" Nov 1 00:44:14.008227 env[1301]: 2025-11-01 00:44:13.724 [INFO][4088] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" Nov 1 00:44:14.008227 env[1301]: 2025-11-01 00:44:13.724 [INFO][4088] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" Nov 1 00:44:14.008227 env[1301]: 2025-11-01 00:44:13.971 [INFO][4113] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" HandleID="k8s-pod-network.7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--pf9ws-eth0" Nov 1 00:44:14.008227 env[1301]: 2025-11-01 00:44:13.974 [INFO][4113] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:14.008227 env[1301]: 2025-11-01 00:44:13.974 [INFO][4113] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:14.008227 env[1301]: 2025-11-01 00:44:13.989 [WARNING][4113] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" HandleID="k8s-pod-network.7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--pf9ws-eth0" Nov 1 00:44:14.008227 env[1301]: 2025-11-01 00:44:13.989 [INFO][4113] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" HandleID="k8s-pod-network.7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--pf9ws-eth0" Nov 1 00:44:14.008227 env[1301]: 2025-11-01 00:44:13.993 [INFO][4113] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:14.008227 env[1301]: 2025-11-01 00:44:14.000 [INFO][4088] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" Nov 1 00:44:14.009670 env[1301]: time="2025-11-01T00:44:14.009574619Z" level=info msg="TearDown network for sandbox \"7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052\" successfully" Nov 1 00:44:14.009889 env[1301]: time="2025-11-01T00:44:14.009853004Z" level=info msg="StopPodSandbox for \"7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052\" returns successfully" Nov 1 00:44:14.011754 env[1301]: time="2025-11-01T00:44:14.011707259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699db95d94-pf9ws,Uid:4e92eb00-99ac-4f51-a076-ab8bc59ed374,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:44:14.088066 env[1301]: time="2025-11-01T00:44:14.087973279Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:14.097209 env[1301]: time="2025-11-01T00:44:14.097092691Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" 
failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:44:14.102452 kubelet[2193]: E1101 00:44:14.097589 2193 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:44:14.102452 kubelet[2193]: E1101 00:44:14.097711 2193 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:44:14.102452 kubelet[2193]: E1101 00:44:14.097974 2193 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2rjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fvm9x_calico-system(1db94968-800e-4bd7-88c1-2551a090e4ab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:14.102452 kubelet[2193]: E1101 00:44:14.099623 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fvm9x" podUID="1db94968-800e-4bd7-88c1-2551a090e4ab" Nov 1 00:44:14.164317 systemd-networkd[1062]: calie5efbfc4fa4: Gained IPv6LL Nov 1 00:44:14.397209 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 00:44:14.407082 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif4af11b6ac1: link becomes ready Nov 1 00:44:14.406312 systemd-networkd[1062]: calif4af11b6ac1: Link UP Nov 1 00:44:14.406648 systemd-networkd[1062]: calif4af11b6ac1: Gained carrier Nov 1 00:44:14.435397 env[1301]: 2025-11-01 00:44:13.987 [INFO][4126] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--4bhnp-eth0 coredns-668d6bf9bc- kube-system a4e6e4e0-2f67-459b-8ed4-30190e515a8d 1004 0 2025-11-01 00:43:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} 
{k8s ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762 coredns-668d6bf9bc-4bhnp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif4af11b6ac1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4df2b88ca50749ecf594ce193021106dfbf2b431a1a516d685e454a2bc933fe3" Namespace="kube-system" Pod="coredns-668d6bf9bc-4bhnp" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--4bhnp-" Nov 1 00:44:14.435397 env[1301]: 2025-11-01 00:44:13.987 [INFO][4126] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4df2b88ca50749ecf594ce193021106dfbf2b431a1a516d685e454a2bc933fe3" Namespace="kube-system" Pod="coredns-668d6bf9bc-4bhnp" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--4bhnp-eth0" Nov 1 00:44:14.435397 env[1301]: 2025-11-01 00:44:14.287 [INFO][4157] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4df2b88ca50749ecf594ce193021106dfbf2b431a1a516d685e454a2bc933fe3" HandleID="k8s-pod-network.4df2b88ca50749ecf594ce193021106dfbf2b431a1a516d685e454a2bc933fe3" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--4bhnp-eth0" Nov 1 00:44:14.435397 env[1301]: 2025-11-01 00:44:14.288 [INFO][4157] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4df2b88ca50749ecf594ce193021106dfbf2b431a1a516d685e454a2bc933fe3" HandleID="k8s-pod-network.4df2b88ca50749ecf594ce193021106dfbf2b431a1a516d685e454a2bc933fe3" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--4bhnp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cc1a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", "pod":"coredns-668d6bf9bc-4bhnp", "timestamp":"2025-11-01 00:44:14.287602961 +0000 UTC"}, 
Hostname:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:44:14.435397 env[1301]: 2025-11-01 00:44:14.288 [INFO][4157] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:14.435397 env[1301]: 2025-11-01 00:44:14.288 [INFO][4157] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:14.435397 env[1301]: 2025-11-01 00:44:14.288 [INFO][4157] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762' Nov 1 00:44:14.435397 env[1301]: 2025-11-01 00:44:14.318 [INFO][4157] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4df2b88ca50749ecf594ce193021106dfbf2b431a1a516d685e454a2bc933fe3" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:14.435397 env[1301]: 2025-11-01 00:44:14.326 [INFO][4157] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:14.435397 env[1301]: 2025-11-01 00:44:14.337 [INFO][4157] ipam/ipam.go 511: Trying affinity for 192.168.97.64/26 host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:14.435397 env[1301]: 2025-11-01 00:44:14.342 [INFO][4157] ipam/ipam.go 158: Attempting to load block cidr=192.168.97.64/26 host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:14.435397 env[1301]: 2025-11-01 00:44:14.346 [INFO][4157] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.97.64/26 host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:14.435397 env[1301]: 2025-11-01 00:44:14.346 [INFO][4157] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.97.64/26 
handle="k8s-pod-network.4df2b88ca50749ecf594ce193021106dfbf2b431a1a516d685e454a2bc933fe3" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:14.435397 env[1301]: 2025-11-01 00:44:14.349 [INFO][4157] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4df2b88ca50749ecf594ce193021106dfbf2b431a1a516d685e454a2bc933fe3 Nov 1 00:44:14.435397 env[1301]: 2025-11-01 00:44:14.357 [INFO][4157] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.97.64/26 handle="k8s-pod-network.4df2b88ca50749ecf594ce193021106dfbf2b431a1a516d685e454a2bc933fe3" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:14.435397 env[1301]: 2025-11-01 00:44:14.371 [INFO][4157] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.97.69/26] block=192.168.97.64/26 handle="k8s-pod-network.4df2b88ca50749ecf594ce193021106dfbf2b431a1a516d685e454a2bc933fe3" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:14.435397 env[1301]: 2025-11-01 00:44:14.372 [INFO][4157] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.97.69/26] handle="k8s-pod-network.4df2b88ca50749ecf594ce193021106dfbf2b431a1a516d685e454a2bc933fe3" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:14.435397 env[1301]: 2025-11-01 00:44:14.372 [INFO][4157] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:44:14.435397 env[1301]: 2025-11-01 00:44:14.372 [INFO][4157] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.97.69/26] IPv6=[] ContainerID="4df2b88ca50749ecf594ce193021106dfbf2b431a1a516d685e454a2bc933fe3" HandleID="k8s-pod-network.4df2b88ca50749ecf594ce193021106dfbf2b431a1a516d685e454a2bc933fe3" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--4bhnp-eth0" Nov 1 00:44:14.436734 env[1301]: 2025-11-01 00:44:14.382 [INFO][4126] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4df2b88ca50749ecf594ce193021106dfbf2b431a1a516d685e454a2bc933fe3" Namespace="kube-system" Pod="coredns-668d6bf9bc-4bhnp" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--4bhnp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--4bhnp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a4e6e4e0-2f67-459b-8ed4-30190e515a8d", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", ContainerID:"", Pod:"coredns-668d6bf9bc-4bhnp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.69/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif4af11b6ac1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:14.436734 env[1301]: 2025-11-01 00:44:14.382 [INFO][4126] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.69/32] ContainerID="4df2b88ca50749ecf594ce193021106dfbf2b431a1a516d685e454a2bc933fe3" Namespace="kube-system" Pod="coredns-668d6bf9bc-4bhnp" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--4bhnp-eth0" Nov 1 00:44:14.436734 env[1301]: 2025-11-01 00:44:14.382 [INFO][4126] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif4af11b6ac1 ContainerID="4df2b88ca50749ecf594ce193021106dfbf2b431a1a516d685e454a2bc933fe3" Namespace="kube-system" Pod="coredns-668d6bf9bc-4bhnp" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--4bhnp-eth0" Nov 1 00:44:14.436734 env[1301]: 2025-11-01 00:44:14.406 [INFO][4126] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4df2b88ca50749ecf594ce193021106dfbf2b431a1a516d685e454a2bc933fe3" Namespace="kube-system" Pod="coredns-668d6bf9bc-4bhnp" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--4bhnp-eth0" Nov 1 00:44:14.436734 env[1301]: 2025-11-01 00:44:14.409 [INFO][4126] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="4df2b88ca50749ecf594ce193021106dfbf2b431a1a516d685e454a2bc933fe3" Namespace="kube-system" Pod="coredns-668d6bf9bc-4bhnp" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--4bhnp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--4bhnp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a4e6e4e0-2f67-459b-8ed4-30190e515a8d", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", ContainerID:"4df2b88ca50749ecf594ce193021106dfbf2b431a1a516d685e454a2bc933fe3", Pod:"coredns-668d6bf9bc-4bhnp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif4af11b6ac1", MAC:"46:b2:1f:6e:eb:f1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:14.436734 env[1301]: 2025-11-01 00:44:14.429 [INFO][4126] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4df2b88ca50749ecf594ce193021106dfbf2b431a1a516d685e454a2bc933fe3" Namespace="kube-system" Pod="coredns-668d6bf9bc-4bhnp" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--4bhnp-eth0" Nov 1 00:44:14.468000 audit[4214]: NETFILTER_CFG table=filter:116 family=2 entries=54 op=nft_register_chain pid=4214 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:44:14.468000 audit[4214]: SYSCALL arch=c000003e syscall=46 success=yes exit=26116 a0=3 a1=7ffea018bda0 a2=0 a3=7ffea018bd8c items=0 ppid=3639 pid=4214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:14.468000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:44:14.526669 env[1301]: time="2025-11-01T00:44:14.526299163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:44:14.527205 env[1301]: time="2025-11-01T00:44:14.527109215Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:44:14.527695 env[1301]: time="2025-11-01T00:44:14.527648763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:44:14.529424 env[1301]: time="2025-11-01T00:44:14.529362565Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4df2b88ca50749ecf594ce193021106dfbf2b431a1a516d685e454a2bc933fe3 pid=4224 runtime=io.containerd.runc.v2 Nov 1 00:44:14.546059 systemd-networkd[1062]: cali6a4beef0d70: Gained IPv6LL Nov 1 00:44:14.567981 systemd[1]: run-netns-cni\x2d5929d4c4\x2ded7e\x2def42\x2d7cb5\x2d187f33961a3e.mount: Deactivated successfully. Nov 1 00:44:14.568390 systemd[1]: run-netns-cni\x2d519349a9\x2d1d73\x2d1e22\x2d8341\x2dae531ea06db7.mount: Deactivated successfully. Nov 1 00:44:14.614375 systemd-networkd[1062]: cali29bceb87769: Link UP Nov 1 00:44:14.627253 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali29bceb87769: link becomes ready Nov 1 00:44:14.633490 systemd-networkd[1062]: cali29bceb87769: Gained carrier Nov 1 00:44:14.697294 env[1301]: 2025-11-01 00:44:14.143 [INFO][4140] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--7xkbs-eth0 coredns-668d6bf9bc- kube-system b2c78f36-7235-48aa-baae-2bd9c8a78b81 1008 0 2025-11-01 00:43:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762 coredns-668d6bf9bc-7xkbs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali29bceb87769 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b179166167ce9caa629963198ec224b28370ba210dbf5f87aaa63970b85705e8" Namespace="kube-system" Pod="coredns-668d6bf9bc-7xkbs" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--7xkbs-" Nov 1 00:44:14.697294 
env[1301]: 2025-11-01 00:44:14.143 [INFO][4140] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b179166167ce9caa629963198ec224b28370ba210dbf5f87aaa63970b85705e8" Namespace="kube-system" Pod="coredns-668d6bf9bc-7xkbs" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--7xkbs-eth0" Nov 1 00:44:14.697294 env[1301]: 2025-11-01 00:44:14.330 [INFO][4184] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b179166167ce9caa629963198ec224b28370ba210dbf5f87aaa63970b85705e8" HandleID="k8s-pod-network.b179166167ce9caa629963198ec224b28370ba210dbf5f87aaa63970b85705e8" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--7xkbs-eth0" Nov 1 00:44:14.697294 env[1301]: 2025-11-01 00:44:14.338 [INFO][4184] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b179166167ce9caa629963198ec224b28370ba210dbf5f87aaa63970b85705e8" HandleID="k8s-pod-network.b179166167ce9caa629963198ec224b28370ba210dbf5f87aaa63970b85705e8" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--7xkbs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ac6d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", "pod":"coredns-668d6bf9bc-7xkbs", "timestamp":"2025-11-01 00:44:14.330025897 +0000 UTC"}, Hostname:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:44:14.697294 env[1301]: 2025-11-01 00:44:14.339 [INFO][4184] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:14.697294 env[1301]: 2025-11-01 00:44:14.372 [INFO][4184] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:44:14.697294 env[1301]: 2025-11-01 00:44:14.372 [INFO][4184] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762' Nov 1 00:44:14.697294 env[1301]: 2025-11-01 00:44:14.446 [INFO][4184] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b179166167ce9caa629963198ec224b28370ba210dbf5f87aaa63970b85705e8" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:14.697294 env[1301]: 2025-11-01 00:44:14.476 [INFO][4184] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:14.697294 env[1301]: 2025-11-01 00:44:14.483 [INFO][4184] ipam/ipam.go 511: Trying affinity for 192.168.97.64/26 host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:14.697294 env[1301]: 2025-11-01 00:44:14.486 [INFO][4184] ipam/ipam.go 158: Attempting to load block cidr=192.168.97.64/26 host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:14.697294 env[1301]: 2025-11-01 00:44:14.491 [INFO][4184] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.97.64/26 host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:14.697294 env[1301]: 2025-11-01 00:44:14.491 [INFO][4184] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.97.64/26 handle="k8s-pod-network.b179166167ce9caa629963198ec224b28370ba210dbf5f87aaa63970b85705e8" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:14.697294 env[1301]: 2025-11-01 00:44:14.493 [INFO][4184] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b179166167ce9caa629963198ec224b28370ba210dbf5f87aaa63970b85705e8 Nov 1 00:44:14.697294 env[1301]: 2025-11-01 00:44:14.499 [INFO][4184] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.97.64/26 handle="k8s-pod-network.b179166167ce9caa629963198ec224b28370ba210dbf5f87aaa63970b85705e8" 
host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:14.697294 env[1301]: 2025-11-01 00:44:14.514 [INFO][4184] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.97.70/26] block=192.168.97.64/26 handle="k8s-pod-network.b179166167ce9caa629963198ec224b28370ba210dbf5f87aaa63970b85705e8" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:14.697294 env[1301]: 2025-11-01 00:44:14.514 [INFO][4184] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.97.70/26] handle="k8s-pod-network.b179166167ce9caa629963198ec224b28370ba210dbf5f87aaa63970b85705e8" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:14.697294 env[1301]: 2025-11-01 00:44:14.514 [INFO][4184] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:14.697294 env[1301]: 2025-11-01 00:44:14.522 [INFO][4184] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.97.70/26] IPv6=[] ContainerID="b179166167ce9caa629963198ec224b28370ba210dbf5f87aaa63970b85705e8" HandleID="k8s-pod-network.b179166167ce9caa629963198ec224b28370ba210dbf5f87aaa63970b85705e8" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--7xkbs-eth0" Nov 1 00:44:14.698672 env[1301]: 2025-11-01 00:44:14.564 [INFO][4140] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b179166167ce9caa629963198ec224b28370ba210dbf5f87aaa63970b85705e8" Namespace="kube-system" Pod="coredns-668d6bf9bc-7xkbs" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--7xkbs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--7xkbs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b2c78f36-7235-48aa-baae-2bd9c8a78b81", ResourceVersion:"1008", Generation:0, 
CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", ContainerID:"", Pod:"coredns-668d6bf9bc-7xkbs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali29bceb87769", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:14.698672 env[1301]: 2025-11-01 00:44:14.581 [INFO][4140] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.70/32] ContainerID="b179166167ce9caa629963198ec224b28370ba210dbf5f87aaa63970b85705e8" Namespace="kube-system" Pod="coredns-668d6bf9bc-7xkbs" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--7xkbs-eth0" Nov 1 00:44:14.698672 env[1301]: 2025-11-01 00:44:14.581 [INFO][4140] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to 
cali29bceb87769 ContainerID="b179166167ce9caa629963198ec224b28370ba210dbf5f87aaa63970b85705e8" Namespace="kube-system" Pod="coredns-668d6bf9bc-7xkbs" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--7xkbs-eth0" Nov 1 00:44:14.698672 env[1301]: 2025-11-01 00:44:14.632 [INFO][4140] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b179166167ce9caa629963198ec224b28370ba210dbf5f87aaa63970b85705e8" Namespace="kube-system" Pod="coredns-668d6bf9bc-7xkbs" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--7xkbs-eth0" Nov 1 00:44:14.698672 env[1301]: 2025-11-01 00:44:14.632 [INFO][4140] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b179166167ce9caa629963198ec224b28370ba210dbf5f87aaa63970b85705e8" Namespace="kube-system" Pod="coredns-668d6bf9bc-7xkbs" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--7xkbs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--7xkbs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b2c78f36-7235-48aa-baae-2bd9c8a78b81", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", ContainerID:"b179166167ce9caa629963198ec224b28370ba210dbf5f87aaa63970b85705e8", Pod:"coredns-668d6bf9bc-7xkbs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali29bceb87769", MAC:"96:58:ac:9e:0c:1e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:14.698672 env[1301]: 2025-11-01 00:44:14.664 [INFO][4140] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b179166167ce9caa629963198ec224b28370ba210dbf5f87aaa63970b85705e8" Namespace="kube-system" Pod="coredns-668d6bf9bc-7xkbs" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--7xkbs-eth0" Nov 1 00:44:14.732686 systemd-networkd[1062]: calibfb5717df7a: Link UP Nov 1 00:44:14.744747 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calibfb5717df7a: link becomes ready Nov 1 00:44:14.745351 systemd-networkd[1062]: calibfb5717df7a: Gained carrier Nov 1 00:44:14.783247 env[1301]: 2025-11-01 00:44:14.167 [INFO][4143] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--kube--controllers--f99bc94f9--dqr4b-eth0 calico-kube-controllers-f99bc94f9- 
calico-system 34b32444-f031-47a6-89b0-97775432ade7 1010 0 2025-11-01 00:43:45 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:f99bc94f9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762 calico-kube-controllers-f99bc94f9-dqr4b eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calibfb5717df7a [] [] }} ContainerID="2fa737ecf4ead43b7e2b96a31792967a2bff1c0cb99bc1f099a2645f8a4908ce" Namespace="calico-system" Pod="calico-kube-controllers-f99bc94f9-dqr4b" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--kube--controllers--f99bc94f9--dqr4b-" Nov 1 00:44:14.783247 env[1301]: 2025-11-01 00:44:14.167 [INFO][4143] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2fa737ecf4ead43b7e2b96a31792967a2bff1c0cb99bc1f099a2645f8a4908ce" Namespace="calico-system" Pod="calico-kube-controllers-f99bc94f9-dqr4b" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--kube--controllers--f99bc94f9--dqr4b-eth0" Nov 1 00:44:14.783247 env[1301]: 2025-11-01 00:44:14.453 [INFO][4186] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2fa737ecf4ead43b7e2b96a31792967a2bff1c0cb99bc1f099a2645f8a4908ce" HandleID="k8s-pod-network.2fa737ecf4ead43b7e2b96a31792967a2bff1c0cb99bc1f099a2645f8a4908ce" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--kube--controllers--f99bc94f9--dqr4b-eth0" Nov 1 00:44:14.783247 env[1301]: 2025-11-01 00:44:14.454 [INFO][4186] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2fa737ecf4ead43b7e2b96a31792967a2bff1c0cb99bc1f099a2645f8a4908ce" HandleID="k8s-pod-network.2fa737ecf4ead43b7e2b96a31792967a2bff1c0cb99bc1f099a2645f8a4908ce" 
Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--kube--controllers--f99bc94f9--dqr4b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000322df0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", "pod":"calico-kube-controllers-f99bc94f9-dqr4b", "timestamp":"2025-11-01 00:44:14.45394077 +0000 UTC"}, Hostname:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:44:14.783247 env[1301]: 2025-11-01 00:44:14.454 [INFO][4186] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:14.783247 env[1301]: 2025-11-01 00:44:14.514 [INFO][4186] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:14.783247 env[1301]: 2025-11-01 00:44:14.514 [INFO][4186] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762' Nov 1 00:44:14.783247 env[1301]: 2025-11-01 00:44:14.597 [INFO][4186] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2fa737ecf4ead43b7e2b96a31792967a2bff1c0cb99bc1f099a2645f8a4908ce" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:14.783247 env[1301]: 2025-11-01 00:44:14.636 [INFO][4186] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:14.783247 env[1301]: 2025-11-01 00:44:14.649 [INFO][4186] ipam/ipam.go 511: Trying affinity for 192.168.97.64/26 host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:14.783247 env[1301]: 2025-11-01 00:44:14.653 [INFO][4186] ipam/ipam.go 158: Attempting to load block cidr=192.168.97.64/26 host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 
1 00:44:14.783247 env[1301]: 2025-11-01 00:44:14.657 [INFO][4186] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.97.64/26 host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:14.783247 env[1301]: 2025-11-01 00:44:14.657 [INFO][4186] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.97.64/26 handle="k8s-pod-network.2fa737ecf4ead43b7e2b96a31792967a2bff1c0cb99bc1f099a2645f8a4908ce" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:14.783247 env[1301]: 2025-11-01 00:44:14.664 [INFO][4186] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2fa737ecf4ead43b7e2b96a31792967a2bff1c0cb99bc1f099a2645f8a4908ce Nov 1 00:44:14.783247 env[1301]: 2025-11-01 00:44:14.679 [INFO][4186] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.97.64/26 handle="k8s-pod-network.2fa737ecf4ead43b7e2b96a31792967a2bff1c0cb99bc1f099a2645f8a4908ce" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:14.783247 env[1301]: 2025-11-01 00:44:14.700 [INFO][4186] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.97.71/26] block=192.168.97.64/26 handle="k8s-pod-network.2fa737ecf4ead43b7e2b96a31792967a2bff1c0cb99bc1f099a2645f8a4908ce" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:14.783247 env[1301]: 2025-11-01 00:44:14.700 [INFO][4186] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.97.71/26] handle="k8s-pod-network.2fa737ecf4ead43b7e2b96a31792967a2bff1c0cb99bc1f099a2645f8a4908ce" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:14.783247 env[1301]: 2025-11-01 00:44:14.704 [INFO][4186] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:44:14.783247 env[1301]: 2025-11-01 00:44:14.704 [INFO][4186] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.97.71/26] IPv6=[] ContainerID="2fa737ecf4ead43b7e2b96a31792967a2bff1c0cb99bc1f099a2645f8a4908ce" HandleID="k8s-pod-network.2fa737ecf4ead43b7e2b96a31792967a2bff1c0cb99bc1f099a2645f8a4908ce" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--kube--controllers--f99bc94f9--dqr4b-eth0" Nov 1 00:44:14.784825 env[1301]: 2025-11-01 00:44:14.712 [INFO][4143] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2fa737ecf4ead43b7e2b96a31792967a2bff1c0cb99bc1f099a2645f8a4908ce" Namespace="calico-system" Pod="calico-kube-controllers-f99bc94f9-dqr4b" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--kube--controllers--f99bc94f9--dqr4b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--kube--controllers--f99bc94f9--dqr4b-eth0", GenerateName:"calico-kube-controllers-f99bc94f9-", Namespace:"calico-system", SelfLink:"", UID:"34b32444-f031-47a6-89b0-97775432ade7", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f99bc94f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", ContainerID:"", Pod:"calico-kube-controllers-f99bc94f9-dqr4b", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibfb5717df7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:14.784825 env[1301]: 2025-11-01 00:44:14.712 [INFO][4143] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.71/32] ContainerID="2fa737ecf4ead43b7e2b96a31792967a2bff1c0cb99bc1f099a2645f8a4908ce" Namespace="calico-system" Pod="calico-kube-controllers-f99bc94f9-dqr4b" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--kube--controllers--f99bc94f9--dqr4b-eth0" Nov 1 00:44:14.784825 env[1301]: 2025-11-01 00:44:14.712 [INFO][4143] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibfb5717df7a ContainerID="2fa737ecf4ead43b7e2b96a31792967a2bff1c0cb99bc1f099a2645f8a4908ce" Namespace="calico-system" Pod="calico-kube-controllers-f99bc94f9-dqr4b" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--kube--controllers--f99bc94f9--dqr4b-eth0" Nov 1 00:44:14.784825 env[1301]: 2025-11-01 00:44:14.747 [INFO][4143] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2fa737ecf4ead43b7e2b96a31792967a2bff1c0cb99bc1f099a2645f8a4908ce" Namespace="calico-system" Pod="calico-kube-controllers-f99bc94f9-dqr4b" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--kube--controllers--f99bc94f9--dqr4b-eth0" Nov 1 00:44:14.784825 env[1301]: 2025-11-01 00:44:14.748 [INFO][4143] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2fa737ecf4ead43b7e2b96a31792967a2bff1c0cb99bc1f099a2645f8a4908ce" Namespace="calico-system" Pod="calico-kube-controllers-f99bc94f9-dqr4b" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--kube--controllers--f99bc94f9--dqr4b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--kube--controllers--f99bc94f9--dqr4b-eth0", GenerateName:"calico-kube-controllers-f99bc94f9-", Namespace:"calico-system", SelfLink:"", UID:"34b32444-f031-47a6-89b0-97775432ade7", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f99bc94f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", ContainerID:"2fa737ecf4ead43b7e2b96a31792967a2bff1c0cb99bc1f099a2645f8a4908ce", Pod:"calico-kube-controllers-f99bc94f9-dqr4b", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibfb5717df7a", MAC:"da:ad:99:be:95:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:14.784825 
env[1301]: 2025-11-01 00:44:14.774 [INFO][4143] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2fa737ecf4ead43b7e2b96a31792967a2bff1c0cb99bc1f099a2645f8a4908ce" Namespace="calico-system" Pod="calico-kube-controllers-f99bc94f9-dqr4b" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--kube--controllers--f99bc94f9--dqr4b-eth0" Nov 1 00:44:14.822683 env[1301]: time="2025-11-01T00:44:14.822604241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4bhnp,Uid:a4e6e4e0-2f67-459b-8ed4-30190e515a8d,Namespace:kube-system,Attempt:1,} returns sandbox id \"4df2b88ca50749ecf594ce193021106dfbf2b431a1a516d685e454a2bc933fe3\"" Nov 1 00:44:14.829648 env[1301]: time="2025-11-01T00:44:14.829587066Z" level=info msg="CreateContainer within sandbox \"4df2b88ca50749ecf594ce193021106dfbf2b431a1a516d685e454a2bc933fe3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:44:14.835000 audit[4283]: NETFILTER_CFG table=filter:117 family=2 entries=76 op=nft_register_chain pid=4283 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:44:14.835000 audit[4283]: SYSCALL arch=c000003e syscall=46 success=yes exit=39396 a0=3 a1=7ffede2fe160 a2=0 a3=7ffede2fe14c items=0 ppid=3639 pid=4283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:14.835000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:44:14.854875 systemd-networkd[1062]: cali959a1974d3b: Link UP Nov 1 00:44:14.863328 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali959a1974d3b: link becomes ready Nov 1 00:44:14.864510 systemd-networkd[1062]: cali959a1974d3b: Gained carrier Nov 1 00:44:14.924207 env[1301]: 
time="2025-11-01T00:44:14.912828030Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:44:14.924207 env[1301]: time="2025-11-01T00:44:14.912928445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:44:14.924207 env[1301]: time="2025-11-01T00:44:14.913015876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:44:14.924207 env[1301]: time="2025-11-01T00:44:14.913361487Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b179166167ce9caa629963198ec224b28370ba210dbf5f87aaa63970b85705e8 pid=4290 runtime=io.containerd.runc.v2 Nov 1 00:44:14.924722 env[1301]: 2025-11-01 00:44:14.311 [INFO][4170] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--pf9ws-eth0 calico-apiserver-699db95d94- calico-apiserver 4e92eb00-99ac-4f51-a076-ab8bc59ed374 1009 0 2025-11-01 00:43:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:699db95d94 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762 calico-apiserver-699db95d94-pf9ws eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali959a1974d3b [] [] }} ContainerID="df7ca9bce7d126dcc54d8456b199631155d3896085a3c60aa5da7c60e0feaba7" Namespace="calico-apiserver" Pod="calico-apiserver-699db95d94-pf9ws" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--pf9ws-" 
Nov 1 00:44:14.924722 env[1301]: 2025-11-01 00:44:14.311 [INFO][4170] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="df7ca9bce7d126dcc54d8456b199631155d3896085a3c60aa5da7c60e0feaba7" Namespace="calico-apiserver" Pod="calico-apiserver-699db95d94-pf9ws" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--pf9ws-eth0" Nov 1 00:44:14.924722 env[1301]: 2025-11-01 00:44:14.501 [INFO][4197] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df7ca9bce7d126dcc54d8456b199631155d3896085a3c60aa5da7c60e0feaba7" HandleID="k8s-pod-network.df7ca9bce7d126dcc54d8456b199631155d3896085a3c60aa5da7c60e0feaba7" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--pf9ws-eth0" Nov 1 00:44:14.924722 env[1301]: 2025-11-01 00:44:14.501 [INFO][4197] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="df7ca9bce7d126dcc54d8456b199631155d3896085a3c60aa5da7c60e0feaba7" HandleID="k8s-pod-network.df7ca9bce7d126dcc54d8456b199631155d3896085a3c60aa5da7c60e0feaba7" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--pf9ws-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e4f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", "pod":"calico-apiserver-699db95d94-pf9ws", "timestamp":"2025-11-01 00:44:14.501481887 +0000 UTC"}, Hostname:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:44:14.924722 env[1301]: 2025-11-01 00:44:14.502 [INFO][4197] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 00:44:14.924722 env[1301]: 2025-11-01 00:44:14.700 [INFO][4197] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:14.924722 env[1301]: 2025-11-01 00:44:14.700 [INFO][4197] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762' Nov 1 00:44:14.924722 env[1301]: 2025-11-01 00:44:14.719 [INFO][4197] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.df7ca9bce7d126dcc54d8456b199631155d3896085a3c60aa5da7c60e0feaba7" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:14.924722 env[1301]: 2025-11-01 00:44:14.730 [INFO][4197] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:14.924722 env[1301]: 2025-11-01 00:44:14.779 [INFO][4197] ipam/ipam.go 511: Trying affinity for 192.168.97.64/26 host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:14.924722 env[1301]: 2025-11-01 00:44:14.786 [INFO][4197] ipam/ipam.go 158: Attempting to load block cidr=192.168.97.64/26 host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:14.924722 env[1301]: 2025-11-01 00:44:14.790 [INFO][4197] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.97.64/26 host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:14.924722 env[1301]: 2025-11-01 00:44:14.791 [INFO][4197] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.97.64/26 handle="k8s-pod-network.df7ca9bce7d126dcc54d8456b199631155d3896085a3c60aa5da7c60e0feaba7" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:14.924722 env[1301]: 2025-11-01 00:44:14.793 [INFO][4197] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.df7ca9bce7d126dcc54d8456b199631155d3896085a3c60aa5da7c60e0feaba7 Nov 1 00:44:14.924722 env[1301]: 2025-11-01 00:44:14.816 [INFO][4197] ipam/ipam.go 1246: Writing block in order to 
claim IPs block=192.168.97.64/26 handle="k8s-pod-network.df7ca9bce7d126dcc54d8456b199631155d3896085a3c60aa5da7c60e0feaba7" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:14.924722 env[1301]: 2025-11-01 00:44:14.835 [INFO][4197] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.97.72/26] block=192.168.97.64/26 handle="k8s-pod-network.df7ca9bce7d126dcc54d8456b199631155d3896085a3c60aa5da7c60e0feaba7" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:14.924722 env[1301]: 2025-11-01 00:44:14.835 [INFO][4197] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.97.72/26] handle="k8s-pod-network.df7ca9bce7d126dcc54d8456b199631155d3896085a3c60aa5da7c60e0feaba7" host="ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762" Nov 1 00:44:14.924722 env[1301]: 2025-11-01 00:44:14.835 [INFO][4197] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:14.924722 env[1301]: 2025-11-01 00:44:14.835 [INFO][4197] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.97.72/26] IPv6=[] ContainerID="df7ca9bce7d126dcc54d8456b199631155d3896085a3c60aa5da7c60e0feaba7" HandleID="k8s-pod-network.df7ca9bce7d126dcc54d8456b199631155d3896085a3c60aa5da7c60e0feaba7" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--pf9ws-eth0" Nov 1 00:44:14.925817 env[1301]: 2025-11-01 00:44:14.843 [INFO][4170] cni-plugin/k8s.go 418: Populated endpoint ContainerID="df7ca9bce7d126dcc54d8456b199631155d3896085a3c60aa5da7c60e0feaba7" Namespace="calico-apiserver" Pod="calico-apiserver-699db95d94-pf9ws" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--pf9ws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--pf9ws-eth0", GenerateName:"calico-apiserver-699db95d94-", Namespace:"calico-apiserver", SelfLink:"", UID:"4e92eb00-99ac-4f51-a076-ab8bc59ed374", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"699db95d94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", ContainerID:"", Pod:"calico-apiserver-699db95d94-pf9ws", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali959a1974d3b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:14.925817 env[1301]: 2025-11-01 00:44:14.843 [INFO][4170] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.97.72/32] ContainerID="df7ca9bce7d126dcc54d8456b199631155d3896085a3c60aa5da7c60e0feaba7" Namespace="calico-apiserver" Pod="calico-apiserver-699db95d94-pf9ws" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--pf9ws-eth0" Nov 1 00:44:14.925817 env[1301]: 2025-11-01 00:44:14.843 [INFO][4170] cni-plugin/dataplane_linux.go 69: Setting 
the host side veth name to cali959a1974d3b ContainerID="df7ca9bce7d126dcc54d8456b199631155d3896085a3c60aa5da7c60e0feaba7" Namespace="calico-apiserver" Pod="calico-apiserver-699db95d94-pf9ws" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--pf9ws-eth0" Nov 1 00:44:14.925817 env[1301]: 2025-11-01 00:44:14.866 [INFO][4170] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df7ca9bce7d126dcc54d8456b199631155d3896085a3c60aa5da7c60e0feaba7" Namespace="calico-apiserver" Pod="calico-apiserver-699db95d94-pf9ws" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--pf9ws-eth0" Nov 1 00:44:14.925817 env[1301]: 2025-11-01 00:44:14.867 [INFO][4170] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="df7ca9bce7d126dcc54d8456b199631155d3896085a3c60aa5da7c60e0feaba7" Namespace="calico-apiserver" Pod="calico-apiserver-699db95d94-pf9ws" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--pf9ws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--pf9ws-eth0", GenerateName:"calico-apiserver-699db95d94-", Namespace:"calico-apiserver", SelfLink:"", UID:"4e92eb00-99ac-4f51-a076-ab8bc59ed374", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"699db95d94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", ContainerID:"df7ca9bce7d126dcc54d8456b199631155d3896085a3c60aa5da7c60e0feaba7", Pod:"calico-apiserver-699db95d94-pf9ws", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali959a1974d3b", MAC:"1e:33:95:5d:40:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:14.925817 env[1301]: 2025-11-01 00:44:14.893 [INFO][4170] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="df7ca9bce7d126dcc54d8456b199631155d3896085a3c60aa5da7c60e0feaba7" Namespace="calico-apiserver" Pod="calico-apiserver-699db95d94-pf9ws" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--pf9ws-eth0" Nov 1 00:44:14.957774 kubelet[2193]: E1101 00:44:14.957543 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-699db95d94-lmk4w" podUID="74931096-7bc0-4134-a8b4-61ec9bf5e338" Nov 1 00:44:14.978457 kubelet[2193]: E1101 00:44:14.976053 2193 pod_workers.go:1301] "Error 
syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fvm9x" podUID="1db94968-800e-4bd7-88c1-2551a090e4ab" Nov 1 00:44:14.996012 env[1301]: time="2025-11-01T00:44:14.995934220Z" level=info msg="CreateContainer within sandbox \"4df2b88ca50749ecf594ce193021106dfbf2b431a1a516d685e454a2bc933fe3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"757901208fc02943f448ce278f797cef848c81869e8e28b4a095833689b391cb\"" Nov 1 00:44:15.021263 env[1301]: time="2025-11-01T00:44:15.021141718Z" level=info msg="StartContainer for \"757901208fc02943f448ce278f797cef848c81869e8e28b4a095833689b391cb\"" Nov 1 00:44:15.033274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount275934178.mount: Deactivated successfully. Nov 1 00:44:15.073291 env[1301]: time="2025-11-01T00:44:15.071081972Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:44:15.073291 env[1301]: time="2025-11-01T00:44:15.071153730Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:44:15.073838 env[1301]: time="2025-11-01T00:44:15.073764272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:44:15.080910 env[1301]: time="2025-11-01T00:44:15.075019896Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2fa737ecf4ead43b7e2b96a31792967a2bff1c0cb99bc1f099a2645f8a4908ce pid=4321 runtime=io.containerd.runc.v2 Nov 1 00:44:15.121384 env[1301]: time="2025-11-01T00:44:15.118537685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:44:15.121384 env[1301]: time="2025-11-01T00:44:15.118639821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:44:15.121384 env[1301]: time="2025-11-01T00:44:15.118685338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:44:15.126653 env[1301]: time="2025-11-01T00:44:15.126556211Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/df7ca9bce7d126dcc54d8456b199631155d3896085a3c60aa5da7c60e0feaba7 pid=4350 runtime=io.containerd.runc.v2 Nov 1 00:44:15.148000 audit[4367]: NETFILTER_CFG table=filter:118 family=2 entries=20 op=nft_register_rule pid=4367 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:15.148000 audit[4367]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc4a03ef20 a2=0 a3=7ffc4a03ef0c items=0 ppid=2355 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:15.148000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:15.161000 audit[4367]: NETFILTER_CFG table=nat:119 family=2 entries=14 op=nft_register_rule pid=4367 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:15.161000 audit[4367]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffc4a03ef20 a2=0 a3=0 items=0 ppid=2355 pid=4367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:15.161000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:15.185000 audit[4374]: NETFILTER_CFG table=filter:120 family=2 entries=61 op=nft_register_chain pid=4374 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:44:15.185000 audit[4374]: SYSCALL arch=c000003e syscall=46 success=yes exit=29016 a0=3 a1=7ffe9b1f8fc0 a2=0 
a3=7ffe9b1f8fac items=0 ppid=3639 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:15.185000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:44:15.236694 env[1301]: time="2025-11-01T00:44:15.236523507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-7xkbs,Uid:b2c78f36-7235-48aa-baae-2bd9c8a78b81,Namespace:kube-system,Attempt:1,} returns sandbox id \"b179166167ce9caa629963198ec224b28370ba210dbf5f87aaa63970b85705e8\"" Nov 1 00:44:15.246343 env[1301]: time="2025-11-01T00:44:15.246253602Z" level=info msg="CreateContainer within sandbox \"b179166167ce9caa629963198ec224b28370ba210dbf5f87aaa63970b85705e8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:44:15.272195 env[1301]: time="2025-11-01T00:44:15.272096936Z" level=info msg="CreateContainer within sandbox \"b179166167ce9caa629963198ec224b28370ba210dbf5f87aaa63970b85705e8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d7cee7b76e9bcaa3c0b7c3142a15ca0ff6f13c847d17f6b8e49b632fdb536460\"" Nov 1 00:44:15.275207 env[1301]: time="2025-11-01T00:44:15.273454502Z" level=info msg="StartContainer for \"d7cee7b76e9bcaa3c0b7c3142a15ca0ff6f13c847d17f6b8e49b632fdb536460\"" Nov 1 00:44:15.304453 env[1301]: time="2025-11-01T00:44:15.304373960Z" level=info msg="StartContainer for \"757901208fc02943f448ce278f797cef848c81869e8e28b4a095833689b391cb\" returns successfully" Nov 1 00:44:15.480537 env[1301]: time="2025-11-01T00:44:15.480477975Z" level=info msg="StartContainer for \"d7cee7b76e9bcaa3c0b7c3142a15ca0ff6f13c847d17f6b8e49b632fdb536460\" returns successfully" Nov 1 00:44:15.546971 env[1301]: time="2025-11-01T00:44:15.545678560Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:calico-kube-controllers-f99bc94f9-dqr4b,Uid:34b32444-f031-47a6-89b0-97775432ade7,Namespace:calico-system,Attempt:1,} returns sandbox id \"2fa737ecf4ead43b7e2b96a31792967a2bff1c0cb99bc1f099a2645f8a4908ce\"" Nov 1 00:44:15.557487 env[1301]: time="2025-11-01T00:44:15.557425048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:44:15.571211 env[1301]: time="2025-11-01T00:44:15.571114390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699db95d94-pf9ws,Uid:4e92eb00-99ac-4f51-a076-ab8bc59ed374,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"df7ca9bce7d126dcc54d8456b199631155d3896085a3c60aa5da7c60e0feaba7\"" Nov 1 00:44:15.771376 env[1301]: time="2025-11-01T00:44:15.771258692Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:15.773315 env[1301]: time="2025-11-01T00:44:15.773202916Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:44:15.773682 kubelet[2193]: E1101 00:44:15.773615 2193 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:44:15.773878 kubelet[2193]: E1101 00:44:15.773705 2193 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:44:15.774524 kubelet[2193]: E1101 00:44:15.774415 2193 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-phntk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-f99bc94f9-dqr4b_calico-system(34b32444-f031-47a6-89b0-97775432ade7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:15.777350 kubelet[2193]: E1101 00:44:15.776323 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f99bc94f9-dqr4b" podUID="34b32444-f031-47a6-89b0-97775432ade7" Nov 1 00:44:15.778058 env[1301]: time="2025-11-01T00:44:15.777995233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:44:15.890160 systemd-networkd[1062]: 
cali29bceb87769: Gained IPv6LL Nov 1 00:44:15.905553 kubelet[2193]: E1101 00:44:15.905184 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f99bc94f9-dqr4b" podUID="34b32444-f031-47a6-89b0-97775432ade7" Nov 1 00:44:15.953999 systemd-networkd[1062]: cali959a1974d3b: Gained IPv6LL Nov 1 00:44:15.956975 kubelet[2193]: I1101 00:44:15.956891 2193 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4bhnp" podStartSLOduration=51.956848631 podStartE2EDuration="51.956848631s" podCreationTimestamp="2025-11-01 00:43:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:44:15.955807201 +0000 UTC m=+58.902908593" watchObservedRunningTime="2025-11-01 00:44:15.956848631 +0000 UTC m=+58.903950014" Nov 1 00:44:15.990042 env[1301]: time="2025-11-01T00:44:15.989955576Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:15.998203 env[1301]: time="2025-11-01T00:44:15.995016088Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:44:15.998445 kubelet[2193]: E1101 00:44:15.995787 2193 log.go:32] "PullImage from image 
service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:44:15.998445 kubelet[2193]: E1101 00:44:15.995899 2193 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:44:15.998445 kubelet[2193]: E1101 00:44:15.996253 2193 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4fbpk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-699db95d94-pf9ws_calico-apiserver(4e92eb00-99ac-4f51-a076-ab8bc59ed374): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:15.998445 kubelet[2193]: E1101 00:44:15.997790 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-699db95d94-pf9ws" podUID="4e92eb00-99ac-4f51-a076-ab8bc59ed374" Nov 1 00:44:16.012478 kubelet[2193]: I1101 00:44:16.012359 2193 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-7xkbs" podStartSLOduration=52.012314229 
podStartE2EDuration="52.012314229s" podCreationTimestamp="2025-11-01 00:43:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:44:16.01177268 +0000 UTC m=+58.958874063" watchObservedRunningTime="2025-11-01 00:44:16.012314229 +0000 UTC m=+58.959415613" Nov 1 00:44:16.037000 audit[4493]: NETFILTER_CFG table=filter:121 family=2 entries=17 op=nft_register_rule pid=4493 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:16.037000 audit[4493]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc61179500 a2=0 a3=7ffc611794ec items=0 ppid=2355 pid=4493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:16.037000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:16.042000 audit[4493]: NETFILTER_CFG table=nat:122 family=2 entries=35 op=nft_register_chain pid=4493 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:16.042000 audit[4493]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc61179500 a2=0 a3=7ffc611794ec items=0 ppid=2355 pid=4493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:16.042000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:16.077000 audit[4495]: NETFILTER_CFG table=filter:123 family=2 entries=14 op=nft_register_rule pid=4495 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:16.077000 audit[4495]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 
a1=7ffeaefadba0 a2=0 a3=7ffeaefadb8c items=0 ppid=2355 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:16.077000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:16.092000 audit[4495]: NETFILTER_CFG table=nat:124 family=2 entries=56 op=nft_register_chain pid=4495 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:16.092000 audit[4495]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffeaefadba0 a2=0 a3=7ffeaefadb8c items=0 ppid=2355 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:16.092000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:16.211348 systemd-networkd[1062]: calibfb5717df7a: Gained IPv6LL Nov 1 00:44:16.273636 systemd-networkd[1062]: calif4af11b6ac1: Gained IPv6LL Nov 1 00:44:16.926323 kubelet[2193]: E1101 00:44:16.926264 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-699db95d94-pf9ws" podUID="4e92eb00-99ac-4f51-a076-ab8bc59ed374" Nov 1 00:44:16.927158 kubelet[2193]: E1101 00:44:16.927097 2193 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f99bc94f9-dqr4b" podUID="34b32444-f031-47a6-89b0-97775432ade7" Nov 1 00:44:17.119000 audit[4498]: NETFILTER_CFG table=filter:125 family=2 entries=14 op=nft_register_rule pid=4498 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:17.127489 kernel: kauditd_printk_skb: 32 callbacks suppressed Nov 1 00:44:17.127682 kernel: audit: type=1325 audit(1761957857.119:412): table=filter:125 family=2 entries=14 op=nft_register_rule pid=4498 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:17.146230 kernel: audit: type=1300 audit(1761957857.119:412): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe23b141f0 a2=0 a3=7ffe23b141dc items=0 ppid=2355 pid=4498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:17.119000 audit[4498]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe23b141f0 a2=0 a3=7ffe23b141dc items=0 ppid=2355 pid=4498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:17.191803 kernel: audit: type=1327 audit(1761957857.119:412): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:17.119000 audit: 
PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:17.192000 audit[4498]: NETFILTER_CFG table=nat:126 family=2 entries=20 op=nft_register_rule pid=4498 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:17.192000 audit[4498]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe23b141f0 a2=0 a3=7ffe23b141dc items=0 ppid=2355 pid=4498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:17.244838 kernel: audit: type=1325 audit(1761957857.192:413): table=nat:126 family=2 entries=20 op=nft_register_rule pid=4498 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:17.245273 kernel: audit: type=1300 audit(1761957857.192:413): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe23b141f0 a2=0 a3=7ffe23b141dc items=0 ppid=2355 pid=4498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:17.192000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:17.263221 kernel: audit: type=1327 audit(1761957857.192:413): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:17.320537 env[1301]: time="2025-11-01T00:44:17.320469181Z" level=info msg="StopPodSandbox for \"534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6\"" Nov 1 00:44:17.420572 env[1301]: 2025-11-01 00:44:17.369 [WARNING][4508] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-whisker--5869c7ff56--5ffpp-eth0" Nov 1 00:44:17.420572 env[1301]: 2025-11-01 00:44:17.369 [INFO][4508] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" Nov 1 00:44:17.420572 env[1301]: 2025-11-01 00:44:17.369 [INFO][4508] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" iface="eth0" netns="" Nov 1 00:44:17.420572 env[1301]: 2025-11-01 00:44:17.370 [INFO][4508] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" Nov 1 00:44:17.420572 env[1301]: 2025-11-01 00:44:17.370 [INFO][4508] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" Nov 1 00:44:17.420572 env[1301]: 2025-11-01 00:44:17.404 [INFO][4515] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" HandleID="k8s-pod-network.534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-whisker--5869c7ff56--5ffpp-eth0" Nov 1 00:44:17.420572 env[1301]: 2025-11-01 00:44:17.405 [INFO][4515] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:17.420572 env[1301]: 2025-11-01 00:44:17.405 [INFO][4515] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:17.420572 env[1301]: 2025-11-01 00:44:17.414 [WARNING][4515] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" HandleID="k8s-pod-network.534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-whisker--5869c7ff56--5ffpp-eth0" Nov 1 00:44:17.420572 env[1301]: 2025-11-01 00:44:17.414 [INFO][4515] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" HandleID="k8s-pod-network.534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-whisker--5869c7ff56--5ffpp-eth0" Nov 1 00:44:17.420572 env[1301]: 2025-11-01 00:44:17.417 [INFO][4515] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:17.420572 env[1301]: 2025-11-01 00:44:17.418 [INFO][4508] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" Nov 1 00:44:17.421734 env[1301]: time="2025-11-01T00:44:17.421673165Z" level=info msg="TearDown network for sandbox \"534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6\" successfully" Nov 1 00:44:17.421921 env[1301]: time="2025-11-01T00:44:17.421887778Z" level=info msg="StopPodSandbox for \"534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6\" returns successfully" Nov 1 00:44:17.425324 env[1301]: time="2025-11-01T00:44:17.423413867Z" level=info msg="RemovePodSandbox for \"534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6\"" Nov 1 00:44:17.425324 env[1301]: time="2025-11-01T00:44:17.423478630Z" level=info msg="Forcibly stopping sandbox \"534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6\"" Nov 1 00:44:17.533726 env[1301]: 2025-11-01 00:44:17.478 [WARNING][4532] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" WorkloadEndpoint="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-whisker--5869c7ff56--5ffpp-eth0" Nov 1 00:44:17.533726 env[1301]: 2025-11-01 00:44:17.478 [INFO][4532] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" Nov 1 00:44:17.533726 env[1301]: 2025-11-01 00:44:17.478 [INFO][4532] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" iface="eth0" netns="" Nov 1 00:44:17.533726 env[1301]: 2025-11-01 00:44:17.478 [INFO][4532] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" Nov 1 00:44:17.533726 env[1301]: 2025-11-01 00:44:17.478 [INFO][4532] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" Nov 1 00:44:17.533726 env[1301]: 2025-11-01 00:44:17.509 [INFO][4539] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" HandleID="k8s-pod-network.534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-whisker--5869c7ff56--5ffpp-eth0" Nov 1 00:44:17.533726 env[1301]: 2025-11-01 00:44:17.509 [INFO][4539] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:17.533726 env[1301]: 2025-11-01 00:44:17.509 [INFO][4539] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:17.533726 env[1301]: 2025-11-01 00:44:17.522 [WARNING][4539] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" HandleID="k8s-pod-network.534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-whisker--5869c7ff56--5ffpp-eth0" Nov 1 00:44:17.533726 env[1301]: 2025-11-01 00:44:17.522 [INFO][4539] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" HandleID="k8s-pod-network.534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-whisker--5869c7ff56--5ffpp-eth0" Nov 1 00:44:17.533726 env[1301]: 2025-11-01 00:44:17.527 [INFO][4539] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:17.533726 env[1301]: 2025-11-01 00:44:17.529 [INFO][4532] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6" Nov 1 00:44:17.533726 env[1301]: time="2025-11-01T00:44:17.531669147Z" level=info msg="TearDown network for sandbox \"534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6\" successfully" Nov 1 00:44:17.542815 env[1301]: time="2025-11-01T00:44:17.542754956Z" level=info msg="RemovePodSandbox \"534cfe13f2d08aebda3413cab7632c8a36310aba967ca3a8a06cd57fca967af6\" returns successfully" Nov 1 00:44:17.543422 env[1301]: time="2025-11-01T00:44:17.543375636Z" level=info msg="StopPodSandbox for \"22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a\"" Nov 1 00:44:17.660398 env[1301]: 2025-11-01 00:44:17.600 [WARNING][4554] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--7xkbs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b2c78f36-7235-48aa-baae-2bd9c8a78b81", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", ContainerID:"b179166167ce9caa629963198ec224b28370ba210dbf5f87aaa63970b85705e8", Pod:"coredns-668d6bf9bc-7xkbs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali29bceb87769", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:17.660398 env[1301]: 2025-11-01 00:44:17.601 [INFO][4554] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" Nov 1 00:44:17.660398 env[1301]: 2025-11-01 00:44:17.601 [INFO][4554] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" iface="eth0" netns="" Nov 1 00:44:17.660398 env[1301]: 2025-11-01 00:44:17.601 [INFO][4554] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" Nov 1 00:44:17.660398 env[1301]: 2025-11-01 00:44:17.601 [INFO][4554] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" Nov 1 00:44:17.660398 env[1301]: 2025-11-01 00:44:17.639 [INFO][4561] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" HandleID="k8s-pod-network.22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--7xkbs-eth0" Nov 1 00:44:17.660398 env[1301]: 2025-11-01 00:44:17.644 [INFO][4561] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:17.660398 env[1301]: 2025-11-01 00:44:17.644 [INFO][4561] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:17.660398 env[1301]: 2025-11-01 00:44:17.654 [WARNING][4561] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" HandleID="k8s-pod-network.22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--7xkbs-eth0" Nov 1 00:44:17.660398 env[1301]: 2025-11-01 00:44:17.654 [INFO][4561] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" HandleID="k8s-pod-network.22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--7xkbs-eth0" Nov 1 00:44:17.660398 env[1301]: 2025-11-01 00:44:17.656 [INFO][4561] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:17.660398 env[1301]: 2025-11-01 00:44:17.658 [INFO][4554] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" Nov 1 00:44:17.661403 env[1301]: time="2025-11-01T00:44:17.660446003Z" level=info msg="TearDown network for sandbox \"22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a\" successfully" Nov 1 00:44:17.661403 env[1301]: time="2025-11-01T00:44:17.660503783Z" level=info msg="StopPodSandbox for \"22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a\" returns successfully" Nov 1 00:44:17.661403 env[1301]: time="2025-11-01T00:44:17.661291331Z" level=info msg="RemovePodSandbox for \"22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a\"" Nov 1 00:44:17.661403 env[1301]: time="2025-11-01T00:44:17.661344854Z" level=info msg="Forcibly stopping sandbox \"22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a\"" Nov 1 00:44:17.780876 env[1301]: 2025-11-01 00:44:17.714 [WARNING][4575] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--7xkbs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b2c78f36-7235-48aa-baae-2bd9c8a78b81", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", ContainerID:"b179166167ce9caa629963198ec224b28370ba210dbf5f87aaa63970b85705e8", Pod:"coredns-668d6bf9bc-7xkbs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali29bceb87769", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:17.780876 env[1301]: 2025-11-01 00:44:17.715 [INFO][4575] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" Nov 1 00:44:17.780876 env[1301]: 2025-11-01 00:44:17.715 [INFO][4575] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" iface="eth0" netns="" Nov 1 00:44:17.780876 env[1301]: 2025-11-01 00:44:17.715 [INFO][4575] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" Nov 1 00:44:17.780876 env[1301]: 2025-11-01 00:44:17.715 [INFO][4575] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" Nov 1 00:44:17.780876 env[1301]: 2025-11-01 00:44:17.750 [INFO][4582] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" HandleID="k8s-pod-network.22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--7xkbs-eth0" Nov 1 00:44:17.780876 env[1301]: 2025-11-01 00:44:17.751 [INFO][4582] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:17.780876 env[1301]: 2025-11-01 00:44:17.751 [INFO][4582] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:17.780876 env[1301]: 2025-11-01 00:44:17.761 [WARNING][4582] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" HandleID="k8s-pod-network.22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--7xkbs-eth0" Nov 1 00:44:17.780876 env[1301]: 2025-11-01 00:44:17.761 [INFO][4582] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" HandleID="k8s-pod-network.22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--7xkbs-eth0" Nov 1 00:44:17.780876 env[1301]: 2025-11-01 00:44:17.767 [INFO][4582] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:17.780876 env[1301]: 2025-11-01 00:44:17.777 [INFO][4575] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a" Nov 1 00:44:17.786595 env[1301]: time="2025-11-01T00:44:17.786398675Z" level=info msg="TearDown network for sandbox \"22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a\" successfully" Nov 1 00:44:17.808207 env[1301]: time="2025-11-01T00:44:17.808113487Z" level=info msg="RemovePodSandbox \"22e3a8a58a21726252f30e30ac4bfe13606f28b514b9ec1837271a9f87cb635a\" returns successfully" Nov 1 00:44:17.809356 env[1301]: time="2025-11-01T00:44:17.809308354Z" level=info msg="StopPodSandbox for \"8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c\"" Nov 1 00:44:17.916041 env[1301]: 2025-11-01 00:44:17.865 [WARNING][4597] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-csi--node--driver--fvm9x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1db94968-800e-4bd7-88c1-2551a090e4ab", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", ContainerID:"955e8da4c063714cb24a75c1cdce638732b7e46804aabae37b1649b31157667b", Pod:"csi-node-driver-fvm9x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.97.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie5efbfc4fa4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:17.916041 env[1301]: 2025-11-01 00:44:17.865 [INFO][4597] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" Nov 1 00:44:17.916041 env[1301]: 2025-11-01 00:44:17.866 [INFO][4597] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" iface="eth0" netns="" Nov 1 00:44:17.916041 env[1301]: 2025-11-01 00:44:17.866 [INFO][4597] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" Nov 1 00:44:17.916041 env[1301]: 2025-11-01 00:44:17.866 [INFO][4597] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" Nov 1 00:44:17.916041 env[1301]: 2025-11-01 00:44:17.899 [INFO][4604] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" HandleID="k8s-pod-network.8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-csi--node--driver--fvm9x-eth0" Nov 1 00:44:17.916041 env[1301]: 2025-11-01 00:44:17.899 [INFO][4604] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:17.916041 env[1301]: 2025-11-01 00:44:17.900 [INFO][4604] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:17.916041 env[1301]: 2025-11-01 00:44:17.909 [WARNING][4604] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" HandleID="k8s-pod-network.8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-csi--node--driver--fvm9x-eth0" Nov 1 00:44:17.916041 env[1301]: 2025-11-01 00:44:17.909 [INFO][4604] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" HandleID="k8s-pod-network.8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-csi--node--driver--fvm9x-eth0" Nov 1 00:44:17.916041 env[1301]: 2025-11-01 00:44:17.911 [INFO][4604] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:17.916041 env[1301]: 2025-11-01 00:44:17.913 [INFO][4597] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" Nov 1 00:44:17.917240 env[1301]: time="2025-11-01T00:44:17.917144007Z" level=info msg="TearDown network for sandbox \"8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c\" successfully" Nov 1 00:44:17.917891 env[1301]: time="2025-11-01T00:44:17.917240579Z" level=info msg="StopPodSandbox for \"8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c\" returns successfully" Nov 1 00:44:17.918441 env[1301]: time="2025-11-01T00:44:17.918394775Z" level=info msg="RemovePodSandbox for \"8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c\"" Nov 1 00:44:17.918594 env[1301]: time="2025-11-01T00:44:17.918457583Z" level=info msg="Forcibly stopping sandbox \"8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c\"" Nov 1 00:44:18.019116 env[1301]: 2025-11-01 00:44:17.973 [WARNING][4618] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-csi--node--driver--fvm9x-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1db94968-800e-4bd7-88c1-2551a090e4ab", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", ContainerID:"955e8da4c063714cb24a75c1cdce638732b7e46804aabae37b1649b31157667b", Pod:"csi-node-driver-fvm9x", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.97.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie5efbfc4fa4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:18.019116 env[1301]: 2025-11-01 00:44:17.973 [INFO][4618] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" Nov 1 00:44:18.019116 env[1301]: 2025-11-01 00:44:17.973 [INFO][4618] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" iface="eth0" netns="" Nov 1 00:44:18.019116 env[1301]: 2025-11-01 00:44:17.973 [INFO][4618] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" Nov 1 00:44:18.019116 env[1301]: 2025-11-01 00:44:17.973 [INFO][4618] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" Nov 1 00:44:18.019116 env[1301]: 2025-11-01 00:44:18.004 [INFO][4626] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" HandleID="k8s-pod-network.8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-csi--node--driver--fvm9x-eth0" Nov 1 00:44:18.019116 env[1301]: 2025-11-01 00:44:18.004 [INFO][4626] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:18.019116 env[1301]: 2025-11-01 00:44:18.004 [INFO][4626] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:18.019116 env[1301]: 2025-11-01 00:44:18.013 [WARNING][4626] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" HandleID="k8s-pod-network.8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-csi--node--driver--fvm9x-eth0" Nov 1 00:44:18.019116 env[1301]: 2025-11-01 00:44:18.013 [INFO][4626] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" HandleID="k8s-pod-network.8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-csi--node--driver--fvm9x-eth0" Nov 1 00:44:18.019116 env[1301]: 2025-11-01 00:44:18.015 [INFO][4626] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:18.019116 env[1301]: 2025-11-01 00:44:18.017 [INFO][4618] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c" Nov 1 00:44:18.020016 env[1301]: time="2025-11-01T00:44:18.019192568Z" level=info msg="TearDown network for sandbox \"8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c\" successfully" Nov 1 00:44:18.024165 env[1301]: time="2025-11-01T00:44:18.024110517Z" level=info msg="RemovePodSandbox \"8bb582079cb2fb0a49b4fa469308c18282efa2a7b429073baa1d8356a342048c\" returns successfully" Nov 1 00:44:18.024963 env[1301]: time="2025-11-01T00:44:18.024913094Z" level=info msg="StopPodSandbox for \"7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052\"" Nov 1 00:44:18.126449 env[1301]: 2025-11-01 00:44:18.083 [WARNING][4641] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--pf9ws-eth0", GenerateName:"calico-apiserver-699db95d94-", Namespace:"calico-apiserver", SelfLink:"", UID:"4e92eb00-99ac-4f51-a076-ab8bc59ed374", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"699db95d94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", ContainerID:"df7ca9bce7d126dcc54d8456b199631155d3896085a3c60aa5da7c60e0feaba7", Pod:"calico-apiserver-699db95d94-pf9ws", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali959a1974d3b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:18.126449 env[1301]: 2025-11-01 00:44:18.083 [INFO][4641] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" Nov 1 00:44:18.126449 env[1301]: 2025-11-01 00:44:18.083 
[INFO][4641] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" iface="eth0" netns="" Nov 1 00:44:18.126449 env[1301]: 2025-11-01 00:44:18.083 [INFO][4641] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" Nov 1 00:44:18.126449 env[1301]: 2025-11-01 00:44:18.083 [INFO][4641] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" Nov 1 00:44:18.126449 env[1301]: 2025-11-01 00:44:18.111 [INFO][4648] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" HandleID="k8s-pod-network.7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--pf9ws-eth0" Nov 1 00:44:18.126449 env[1301]: 2025-11-01 00:44:18.112 [INFO][4648] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:18.126449 env[1301]: 2025-11-01 00:44:18.112 [INFO][4648] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:18.126449 env[1301]: 2025-11-01 00:44:18.121 [WARNING][4648] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" HandleID="k8s-pod-network.7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--pf9ws-eth0" Nov 1 00:44:18.126449 env[1301]: 2025-11-01 00:44:18.121 [INFO][4648] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" HandleID="k8s-pod-network.7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--pf9ws-eth0" Nov 1 00:44:18.126449 env[1301]: 2025-11-01 00:44:18.123 [INFO][4648] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:18.126449 env[1301]: 2025-11-01 00:44:18.124 [INFO][4641] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" Nov 1 00:44:18.127192 env[1301]: time="2025-11-01T00:44:18.127118865Z" level=info msg="TearDown network for sandbox \"7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052\" successfully" Nov 1 00:44:18.127383 env[1301]: time="2025-11-01T00:44:18.127328383Z" level=info msg="StopPodSandbox for \"7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052\" returns successfully" Nov 1 00:44:18.128229 env[1301]: time="2025-11-01T00:44:18.128164775Z" level=info msg="RemovePodSandbox for \"7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052\"" Nov 1 00:44:18.128496 env[1301]: time="2025-11-01T00:44:18.128399970Z" level=info msg="Forcibly stopping sandbox \"7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052\"" Nov 1 00:44:18.213420 env[1301]: 2025-11-01 00:44:18.170 [WARNING][4662] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--pf9ws-eth0", GenerateName:"calico-apiserver-699db95d94-", Namespace:"calico-apiserver", SelfLink:"", UID:"4e92eb00-99ac-4f51-a076-ab8bc59ed374", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"699db95d94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", ContainerID:"df7ca9bce7d126dcc54d8456b199631155d3896085a3c60aa5da7c60e0feaba7", Pod:"calico-apiserver-699db95d94-pf9ws", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali959a1974d3b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:18.213420 env[1301]: 2025-11-01 00:44:18.171 [INFO][4662] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" Nov 1 00:44:18.213420 env[1301]: 2025-11-01 00:44:18.171 
[INFO][4662] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" iface="eth0" netns="" Nov 1 00:44:18.213420 env[1301]: 2025-11-01 00:44:18.171 [INFO][4662] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" Nov 1 00:44:18.213420 env[1301]: 2025-11-01 00:44:18.171 [INFO][4662] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" Nov 1 00:44:18.213420 env[1301]: 2025-11-01 00:44:18.197 [INFO][4669] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" HandleID="k8s-pod-network.7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--pf9ws-eth0" Nov 1 00:44:18.213420 env[1301]: 2025-11-01 00:44:18.197 [INFO][4669] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:18.213420 env[1301]: 2025-11-01 00:44:18.197 [INFO][4669] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:18.213420 env[1301]: 2025-11-01 00:44:18.207 [WARNING][4669] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" HandleID="k8s-pod-network.7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--pf9ws-eth0" Nov 1 00:44:18.213420 env[1301]: 2025-11-01 00:44:18.207 [INFO][4669] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" HandleID="k8s-pod-network.7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--pf9ws-eth0" Nov 1 00:44:18.213420 env[1301]: 2025-11-01 00:44:18.209 [INFO][4669] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:18.213420 env[1301]: 2025-11-01 00:44:18.211 [INFO][4662] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052" Nov 1 00:44:18.214444 env[1301]: time="2025-11-01T00:44:18.213474136Z" level=info msg="TearDown network for sandbox \"7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052\" successfully" Nov 1 00:44:18.221520 env[1301]: time="2025-11-01T00:44:18.219079302Z" level=info msg="RemovePodSandbox \"7fb632302f0bc92359bfeb97ef310d8ea6729f13a9b5059432475cb564c08052\" returns successfully" Nov 1 00:44:18.221520 env[1301]: time="2025-11-01T00:44:18.219968085Z" level=info msg="StopPodSandbox for \"6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3\"" Nov 1 00:44:18.312758 env[1301]: 2025-11-01 00:44:18.269 [WARNING][4685] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-goldmane--666569f655--xt7wl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"bb5676df-eb26-4a3d-9a39-dc277ac29b28", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", ContainerID:"d7cd9fd77f88b6b66516b3e91558b918d3cc8375b0c46c32719768ebb561b0d4", Pod:"goldmane-666569f655-xt7wl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.97.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia9aba9247e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:18.312758 env[1301]: 2025-11-01 00:44:18.269 [INFO][4685] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" Nov 1 00:44:18.312758 env[1301]: 2025-11-01 00:44:18.269 [INFO][4685] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" iface="eth0" netns="" Nov 1 00:44:18.312758 env[1301]: 2025-11-01 00:44:18.269 [INFO][4685] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" Nov 1 00:44:18.312758 env[1301]: 2025-11-01 00:44:18.269 [INFO][4685] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" Nov 1 00:44:18.312758 env[1301]: 2025-11-01 00:44:18.299 [INFO][4692] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" HandleID="k8s-pod-network.6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-goldmane--666569f655--xt7wl-eth0" Nov 1 00:44:18.312758 env[1301]: 2025-11-01 00:44:18.299 [INFO][4692] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:18.312758 env[1301]: 2025-11-01 00:44:18.300 [INFO][4692] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:18.312758 env[1301]: 2025-11-01 00:44:18.307 [WARNING][4692] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" HandleID="k8s-pod-network.6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-goldmane--666569f655--xt7wl-eth0" Nov 1 00:44:18.312758 env[1301]: 2025-11-01 00:44:18.307 [INFO][4692] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" HandleID="k8s-pod-network.6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-goldmane--666569f655--xt7wl-eth0" Nov 1 00:44:18.312758 env[1301]: 2025-11-01 00:44:18.309 [INFO][4692] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:18.312758 env[1301]: 2025-11-01 00:44:18.311 [INFO][4685] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" Nov 1 00:44:18.315262 env[1301]: time="2025-11-01T00:44:18.313675309Z" level=info msg="TearDown network for sandbox \"6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3\" successfully" Nov 1 00:44:18.315262 env[1301]: time="2025-11-01T00:44:18.314271681Z" level=info msg="StopPodSandbox for \"6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3\" returns successfully" Nov 1 00:44:18.315648 env[1301]: time="2025-11-01T00:44:18.315395043Z" level=info msg="RemovePodSandbox for \"6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3\"" Nov 1 00:44:18.315648 env[1301]: time="2025-11-01T00:44:18.315451247Z" level=info msg="Forcibly stopping sandbox \"6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3\"" Nov 1 00:44:18.409256 env[1301]: 2025-11-01 00:44:18.364 [WARNING][4707] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-goldmane--666569f655--xt7wl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"bb5676df-eb26-4a3d-9a39-dc277ac29b28", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", ContainerID:"d7cd9fd77f88b6b66516b3e91558b918d3cc8375b0c46c32719768ebb561b0d4", Pod:"goldmane-666569f655-xt7wl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.97.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia9aba9247e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:18.409256 env[1301]: 2025-11-01 00:44:18.364 [INFO][4707] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" Nov 1 00:44:18.409256 env[1301]: 2025-11-01 00:44:18.364 [INFO][4707] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" iface="eth0" netns="" Nov 1 00:44:18.409256 env[1301]: 2025-11-01 00:44:18.364 [INFO][4707] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" Nov 1 00:44:18.409256 env[1301]: 2025-11-01 00:44:18.364 [INFO][4707] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" Nov 1 00:44:18.409256 env[1301]: 2025-11-01 00:44:18.391 [INFO][4714] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" HandleID="k8s-pod-network.6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-goldmane--666569f655--xt7wl-eth0" Nov 1 00:44:18.409256 env[1301]: 2025-11-01 00:44:18.392 [INFO][4714] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:18.409256 env[1301]: 2025-11-01 00:44:18.392 [INFO][4714] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:18.409256 env[1301]: 2025-11-01 00:44:18.401 [WARNING][4714] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" HandleID="k8s-pod-network.6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-goldmane--666569f655--xt7wl-eth0" Nov 1 00:44:18.409256 env[1301]: 2025-11-01 00:44:18.401 [INFO][4714] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" HandleID="k8s-pod-network.6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-goldmane--666569f655--xt7wl-eth0" Nov 1 00:44:18.409256 env[1301]: 2025-11-01 00:44:18.403 [INFO][4714] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:18.409256 env[1301]: 2025-11-01 00:44:18.405 [INFO][4707] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3" Nov 1 00:44:18.409256 env[1301]: time="2025-11-01T00:44:18.406558631Z" level=info msg="TearDown network for sandbox \"6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3\" successfully" Nov 1 00:44:18.412090 env[1301]: time="2025-11-01T00:44:18.411996322Z" level=info msg="RemovePodSandbox \"6fed3a4cfc52854a37a18e75e47727e479e9c0dbc4e12f997cc380f770732cd3\" returns successfully" Nov 1 00:44:18.412869 env[1301]: time="2025-11-01T00:44:18.412818731Z" level=info msg="StopPodSandbox for \"e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3\"" Nov 1 00:44:18.515508 env[1301]: 2025-11-01 00:44:18.465 [WARNING][4729] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--kube--controllers--f99bc94f9--dqr4b-eth0", GenerateName:"calico-kube-controllers-f99bc94f9-", Namespace:"calico-system", SelfLink:"", UID:"34b32444-f031-47a6-89b0-97775432ade7", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f99bc94f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", ContainerID:"2fa737ecf4ead43b7e2b96a31792967a2bff1c0cb99bc1f099a2645f8a4908ce", Pod:"calico-kube-controllers-f99bc94f9-dqr4b", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibfb5717df7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:18.515508 env[1301]: 2025-11-01 00:44:18.466 [INFO][4729] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" Nov 1 00:44:18.515508 env[1301]: 2025-11-01 
00:44:18.466 [INFO][4729] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" iface="eth0" netns="" Nov 1 00:44:18.515508 env[1301]: 2025-11-01 00:44:18.466 [INFO][4729] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" Nov 1 00:44:18.515508 env[1301]: 2025-11-01 00:44:18.466 [INFO][4729] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" Nov 1 00:44:18.515508 env[1301]: 2025-11-01 00:44:18.495 [INFO][4736] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" HandleID="k8s-pod-network.e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--kube--controllers--f99bc94f9--dqr4b-eth0" Nov 1 00:44:18.515508 env[1301]: 2025-11-01 00:44:18.495 [INFO][4736] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:18.515508 env[1301]: 2025-11-01 00:44:18.495 [INFO][4736] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:18.515508 env[1301]: 2025-11-01 00:44:18.510 [WARNING][4736] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" HandleID="k8s-pod-network.e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--kube--controllers--f99bc94f9--dqr4b-eth0" Nov 1 00:44:18.515508 env[1301]: 2025-11-01 00:44:18.510 [INFO][4736] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" HandleID="k8s-pod-network.e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--kube--controllers--f99bc94f9--dqr4b-eth0" Nov 1 00:44:18.515508 env[1301]: 2025-11-01 00:44:18.512 [INFO][4736] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:18.515508 env[1301]: 2025-11-01 00:44:18.513 [INFO][4729] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" Nov 1 00:44:18.516136 env[1301]: time="2025-11-01T00:44:18.515533589Z" level=info msg="TearDown network for sandbox \"e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3\" successfully" Nov 1 00:44:18.516136 env[1301]: time="2025-11-01T00:44:18.515577000Z" level=info msg="StopPodSandbox for \"e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3\" returns successfully" Nov 1 00:44:18.516881 env[1301]: time="2025-11-01T00:44:18.516842533Z" level=info msg="RemovePodSandbox for \"e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3\"" Nov 1 00:44:18.517022 env[1301]: time="2025-11-01T00:44:18.516890661Z" level=info msg="Forcibly stopping sandbox \"e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3\"" Nov 1 00:44:18.605666 env[1301]: 2025-11-01 00:44:18.562 [WARNING][4751] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--kube--controllers--f99bc94f9--dqr4b-eth0", GenerateName:"calico-kube-controllers-f99bc94f9-", Namespace:"calico-system", SelfLink:"", UID:"34b32444-f031-47a6-89b0-97775432ade7", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f99bc94f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", ContainerID:"2fa737ecf4ead43b7e2b96a31792967a2bff1c0cb99bc1f099a2645f8a4908ce", Pod:"calico-kube-controllers-f99bc94f9-dqr4b", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.97.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibfb5717df7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:18.605666 env[1301]: 2025-11-01 00:44:18.563 [INFO][4751] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" Nov 1 00:44:18.605666 env[1301]: 2025-11-01 
00:44:18.563 [INFO][4751] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" iface="eth0" netns="" Nov 1 00:44:18.605666 env[1301]: 2025-11-01 00:44:18.563 [INFO][4751] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" Nov 1 00:44:18.605666 env[1301]: 2025-11-01 00:44:18.563 [INFO][4751] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" Nov 1 00:44:18.605666 env[1301]: 2025-11-01 00:44:18.591 [INFO][4758] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" HandleID="k8s-pod-network.e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--kube--controllers--f99bc94f9--dqr4b-eth0" Nov 1 00:44:18.605666 env[1301]: 2025-11-01 00:44:18.591 [INFO][4758] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:18.605666 env[1301]: 2025-11-01 00:44:18.591 [INFO][4758] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:18.605666 env[1301]: 2025-11-01 00:44:18.600 [WARNING][4758] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" HandleID="k8s-pod-network.e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--kube--controllers--f99bc94f9--dqr4b-eth0" Nov 1 00:44:18.605666 env[1301]: 2025-11-01 00:44:18.600 [INFO][4758] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" HandleID="k8s-pod-network.e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--kube--controllers--f99bc94f9--dqr4b-eth0" Nov 1 00:44:18.605666 env[1301]: 2025-11-01 00:44:18.602 [INFO][4758] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:18.605666 env[1301]: 2025-11-01 00:44:18.603 [INFO][4751] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3" Nov 1 00:44:18.606534 env[1301]: time="2025-11-01T00:44:18.605710416Z" level=info msg="TearDown network for sandbox \"e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3\" successfully" Nov 1 00:44:18.610459 env[1301]: time="2025-11-01T00:44:18.610404121Z" level=info msg="RemovePodSandbox \"e6af6858e2558942b0e6ae377165a32fb50b8b89b32f26b98c00d202da6f34b3\" returns successfully" Nov 1 00:44:18.611125 env[1301]: time="2025-11-01T00:44:18.611072613Z" level=info msg="StopPodSandbox for \"6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773\"" Nov 1 00:44:18.701107 env[1301]: 2025-11-01 00:44:18.658 [WARNING][4773] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--4bhnp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a4e6e4e0-2f67-459b-8ed4-30190e515a8d", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", ContainerID:"4df2b88ca50749ecf594ce193021106dfbf2b431a1a516d685e454a2bc933fe3", Pod:"coredns-668d6bf9bc-4bhnp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif4af11b6ac1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:18.701107 env[1301]: 2025-11-01 00:44:18.658 [INFO][4773] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" Nov 1 00:44:18.701107 env[1301]: 2025-11-01 00:44:18.658 [INFO][4773] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" iface="eth0" netns="" Nov 1 00:44:18.701107 env[1301]: 2025-11-01 00:44:18.658 [INFO][4773] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" Nov 1 00:44:18.701107 env[1301]: 2025-11-01 00:44:18.658 [INFO][4773] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" Nov 1 00:44:18.701107 env[1301]: 2025-11-01 00:44:18.685 [INFO][4780] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" HandleID="k8s-pod-network.6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--4bhnp-eth0" Nov 1 00:44:18.701107 env[1301]: 2025-11-01 00:44:18.687 [INFO][4780] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:18.701107 env[1301]: 2025-11-01 00:44:18.687 [INFO][4780] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:18.701107 env[1301]: 2025-11-01 00:44:18.696 [WARNING][4780] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" HandleID="k8s-pod-network.6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--4bhnp-eth0" Nov 1 00:44:18.701107 env[1301]: 2025-11-01 00:44:18.696 [INFO][4780] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" HandleID="k8s-pod-network.6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--4bhnp-eth0" Nov 1 00:44:18.701107 env[1301]: 2025-11-01 00:44:18.698 [INFO][4780] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:18.701107 env[1301]: 2025-11-01 00:44:18.699 [INFO][4773] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" Nov 1 00:44:18.703725 env[1301]: time="2025-11-01T00:44:18.701056722Z" level=info msg="TearDown network for sandbox \"6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773\" successfully" Nov 1 00:44:18.703725 env[1301]: time="2025-11-01T00:44:18.703656042Z" level=info msg="StopPodSandbox for \"6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773\" returns successfully" Nov 1 00:44:18.704341 env[1301]: time="2025-11-01T00:44:18.704299208Z" level=info msg="RemovePodSandbox for \"6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773\"" Nov 1 00:44:18.704466 env[1301]: time="2025-11-01T00:44:18.704353338Z" level=info msg="Forcibly stopping sandbox \"6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773\"" Nov 1 00:44:18.810102 env[1301]: 2025-11-01 00:44:18.765 [WARNING][4795] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--4bhnp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a4e6e4e0-2f67-459b-8ed4-30190e515a8d", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", ContainerID:"4df2b88ca50749ecf594ce193021106dfbf2b431a1a516d685e454a2bc933fe3", Pod:"coredns-668d6bf9bc-4bhnp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.97.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif4af11b6ac1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:18.810102 env[1301]: 2025-11-01 00:44:18.765 [INFO][4795] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" Nov 1 00:44:18.810102 env[1301]: 2025-11-01 00:44:18.765 [INFO][4795] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" iface="eth0" netns="" Nov 1 00:44:18.810102 env[1301]: 2025-11-01 00:44:18.765 [INFO][4795] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" Nov 1 00:44:18.810102 env[1301]: 2025-11-01 00:44:18.765 [INFO][4795] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" Nov 1 00:44:18.810102 env[1301]: 2025-11-01 00:44:18.796 [INFO][4803] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" HandleID="k8s-pod-network.6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--4bhnp-eth0" Nov 1 00:44:18.810102 env[1301]: 2025-11-01 00:44:18.796 [INFO][4803] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:18.810102 env[1301]: 2025-11-01 00:44:18.796 [INFO][4803] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:18.810102 env[1301]: 2025-11-01 00:44:18.805 [WARNING][4803] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" HandleID="k8s-pod-network.6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--4bhnp-eth0" Nov 1 00:44:18.810102 env[1301]: 2025-11-01 00:44:18.805 [INFO][4803] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" HandleID="k8s-pod-network.6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-coredns--668d6bf9bc--4bhnp-eth0" Nov 1 00:44:18.810102 env[1301]: 2025-11-01 00:44:18.807 [INFO][4803] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:18.810102 env[1301]: 2025-11-01 00:44:18.808 [INFO][4795] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773" Nov 1 00:44:18.811234 env[1301]: time="2025-11-01T00:44:18.811151627Z" level=info msg="TearDown network for sandbox \"6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773\" successfully" Nov 1 00:44:18.816799 env[1301]: time="2025-11-01T00:44:18.816727559Z" level=info msg="RemovePodSandbox \"6fb843558769260d8558941135d887e39a3faa2148e2785b697a2bf455041773\" returns successfully" Nov 1 00:44:18.817535 env[1301]: time="2025-11-01T00:44:18.817479898Z" level=info msg="StopPodSandbox for \"741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112\"" Nov 1 00:44:18.908261 env[1301]: 2025-11-01 00:44:18.862 [WARNING][4818] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--lmk4w-eth0", GenerateName:"calico-apiserver-699db95d94-", Namespace:"calico-apiserver", SelfLink:"", UID:"74931096-7bc0-4134-a8b4-61ec9bf5e338", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"699db95d94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", ContainerID:"afa8aa2882239be15d2f0895965c7ed0d798419ab4909f98a0e655baa71f3ae7", Pod:"calico-apiserver-699db95d94-lmk4w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6a4beef0d70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:18.908261 env[1301]: 2025-11-01 00:44:18.863 [INFO][4818] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" Nov 1 00:44:18.908261 env[1301]: 2025-11-01 00:44:18.863 
[INFO][4818] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" iface="eth0" netns="" Nov 1 00:44:18.908261 env[1301]: 2025-11-01 00:44:18.863 [INFO][4818] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" Nov 1 00:44:18.908261 env[1301]: 2025-11-01 00:44:18.863 [INFO][4818] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" Nov 1 00:44:18.908261 env[1301]: 2025-11-01 00:44:18.891 [INFO][4825] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" HandleID="k8s-pod-network.741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--lmk4w-eth0" Nov 1 00:44:18.908261 env[1301]: 2025-11-01 00:44:18.892 [INFO][4825] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:18.908261 env[1301]: 2025-11-01 00:44:18.892 [INFO][4825] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:18.908261 env[1301]: 2025-11-01 00:44:18.903 [WARNING][4825] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" HandleID="k8s-pod-network.741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--lmk4w-eth0" Nov 1 00:44:18.908261 env[1301]: 2025-11-01 00:44:18.903 [INFO][4825] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" HandleID="k8s-pod-network.741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--lmk4w-eth0" Nov 1 00:44:18.908261 env[1301]: 2025-11-01 00:44:18.905 [INFO][4825] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:18.908261 env[1301]: 2025-11-01 00:44:18.906 [INFO][4818] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" Nov 1 00:44:18.909111 env[1301]: time="2025-11-01T00:44:18.908313304Z" level=info msg="TearDown network for sandbox \"741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112\" successfully" Nov 1 00:44:18.909111 env[1301]: time="2025-11-01T00:44:18.908358475Z" level=info msg="StopPodSandbox for \"741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112\" returns successfully" Nov 1 00:44:18.909111 env[1301]: time="2025-11-01T00:44:18.908999515Z" level=info msg="RemovePodSandbox for \"741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112\"" Nov 1 00:44:18.909111 env[1301]: time="2025-11-01T00:44:18.909044575Z" level=info msg="Forcibly stopping sandbox \"741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112\"" Nov 1 00:44:19.052197 env[1301]: 2025-11-01 00:44:18.960 [WARNING][4839] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--lmk4w-eth0", GenerateName:"calico-apiserver-699db95d94-", Namespace:"calico-apiserver", SelfLink:"", UID:"74931096-7bc0-4134-a8b4-61ec9bf5e338", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"699db95d94", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510-3-8-nightly-20251031-2100-553bfd7f9463bd55c762", ContainerID:"afa8aa2882239be15d2f0895965c7ed0d798419ab4909f98a0e655baa71f3ae7", Pod:"calico-apiserver-699db95d94-lmk4w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.97.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6a4beef0d70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:44:19.052197 env[1301]: 2025-11-01 00:44:18.961 [INFO][4839] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" Nov 1 00:44:19.052197 env[1301]: 2025-11-01 00:44:18.961 
[INFO][4839] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" iface="eth0" netns="" Nov 1 00:44:19.052197 env[1301]: 2025-11-01 00:44:18.961 [INFO][4839] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" Nov 1 00:44:19.052197 env[1301]: 2025-11-01 00:44:18.961 [INFO][4839] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" Nov 1 00:44:19.052197 env[1301]: 2025-11-01 00:44:19.007 [INFO][4846] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" HandleID="k8s-pod-network.741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--lmk4w-eth0" Nov 1 00:44:19.052197 env[1301]: 2025-11-01 00:44:19.009 [INFO][4846] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:44:19.052197 env[1301]: 2025-11-01 00:44:19.009 [INFO][4846] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:44:19.052197 env[1301]: 2025-11-01 00:44:19.036 [WARNING][4846] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" HandleID="k8s-pod-network.741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--lmk4w-eth0" Nov 1 00:44:19.052197 env[1301]: 2025-11-01 00:44:19.036 [INFO][4846] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" HandleID="k8s-pod-network.741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" Workload="ci--3510--3--8--nightly--20251031--2100--553bfd7f9463bd55c762-k8s-calico--apiserver--699db95d94--lmk4w-eth0" Nov 1 00:44:19.052197 env[1301]: 2025-11-01 00:44:19.040 [INFO][4846] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:44:19.052197 env[1301]: 2025-11-01 00:44:19.043 [INFO][4839] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112" Nov 1 00:44:19.053332 env[1301]: time="2025-11-01T00:44:19.053266857Z" level=info msg="TearDown network for sandbox \"741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112\" successfully" Nov 1 00:44:19.059560 env[1301]: time="2025-11-01T00:44:19.059414756Z" level=info msg="RemovePodSandbox \"741f551cb81629a10309083b2d1ba32a61eba43e96433938306d18605252a112\" returns successfully" Nov 1 00:44:25.393955 env[1301]: time="2025-11-01T00:44:25.393891199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:44:25.598804 env[1301]: time="2025-11-01T00:44:25.598717758Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:25.600999 env[1301]: time="2025-11-01T00:44:25.600882128Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:44:25.601408 kubelet[2193]: E1101 00:44:25.601327 2193 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:44:25.602042 kubelet[2193]: E1101 00:44:25.601416 2193 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:44:25.602042 kubelet[2193]: E1101 00:44:25.601860 2193 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qm7px,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-xt7wl_calico-system(bb5676df-eb26-4a3d-9a39-dc277ac29b28): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:25.603126 env[1301]: time="2025-11-01T00:44:25.603069786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:44:25.603946 kubelet[2193]: E1101 00:44:25.603874 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xt7wl" podUID="bb5676df-eb26-4a3d-9a39-dc277ac29b28" Nov 1 00:44:25.803366 env[1301]: time="2025-11-01T00:44:25.803280017Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Nov 1 00:44:25.805244 env[1301]: time="2025-11-01T00:44:25.805131686Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:44:25.805600 kubelet[2193]: E1101 00:44:25.805536 2193 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:44:25.805772 kubelet[2193]: E1101 00:44:25.805622 2193 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:44:25.805869 kubelet[2193]: E1101 00:44:25.805807 2193 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:082e0ea68660410580a68d5ee8e902f4,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fkph4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-768cf9cc9d-2cqdw_calico-system(bae1cc02-5d35-4e6c-8d44-6ad010de9d41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:25.809069 env[1301]: time="2025-11-01T00:44:25.809025047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:44:26.017826 
env[1301]: time="2025-11-01T00:44:26.017708053Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:26.019553 env[1301]: time="2025-11-01T00:44:26.019456224Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:44:26.019976 kubelet[2193]: E1101 00:44:26.019897 2193 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:44:26.020122 kubelet[2193]: E1101 00:44:26.019984 2193 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:44:26.020382 kubelet[2193]: E1101 00:44:26.020317 2193 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fkph4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-768cf9cc9d-2cqdw_calico-system(bae1cc02-5d35-4e6c-8d44-6ad010de9d41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:26.022219 kubelet[2193]: E1101 00:44:26.021569 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-768cf9cc9d-2cqdw" podUID="bae1cc02-5d35-4e6c-8d44-6ad010de9d41" Nov 1 00:44:26.393289 env[1301]: time="2025-11-01T00:44:26.393219607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:44:26.603602 env[1301]: time="2025-11-01T00:44:26.603503931Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:26.605247 env[1301]: time="2025-11-01T00:44:26.605117894Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:44:26.605694 kubelet[2193]: E1101 00:44:26.605622 2193 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:44:26.606323 kubelet[2193]: E1101 00:44:26.605710 2193 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:44:26.606865 kubelet[2193]: E1101 00:44:26.606704 2193 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lb57m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-699db95d94-lmk4w_calico-apiserver(74931096-7bc0-4134-a8b4-61ec9bf5e338): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:26.608348 kubelet[2193]: E1101 00:44:26.608275 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-699db95d94-lmk4w" podUID="74931096-7bc0-4134-a8b4-61ec9bf5e338" Nov 1 00:44:27.396695 env[1301]: time="2025-11-01T00:44:27.396634330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:44:27.607853 env[1301]: time="2025-11-01T00:44:27.607755120Z" 
level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:27.609721 env[1301]: time="2025-11-01T00:44:27.609639572Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:44:27.610134 kubelet[2193]: E1101 00:44:27.610048 2193 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:44:27.610742 kubelet[2193]: E1101 00:44:27.610143 2193 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:44:27.611277 kubelet[2193]: E1101 00:44:27.611151 2193 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2rjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fvm9x_calico-system(1db94968-800e-4bd7-88c1-2551a090e4ab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:27.614646 env[1301]: time="2025-11-01T00:44:27.614579688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:44:27.838548 env[1301]: time="2025-11-01T00:44:27.838454674Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:27.841156 env[1301]: time="2025-11-01T00:44:27.841034098Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:44:27.843506 kubelet[2193]: E1101 00:44:27.842384 2193 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:44:27.843506 kubelet[2193]: E1101 00:44:27.842454 2193 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:44:27.843506 kubelet[2193]: E1101 00:44:27.842646 2193 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 
--csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2rjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fvm9x_calico-system(1db94968-800e-4bd7-88c1-2551a090e4ab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:27.845144 kubelet[2193]: E1101 00:44:27.844999 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fvm9x" podUID="1db94968-800e-4bd7-88c1-2551a090e4ab" Nov 1 00:44:28.393888 env[1301]: time="2025-11-01T00:44:28.393810145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:44:28.586816 env[1301]: time="2025-11-01T00:44:28.586713037Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:28.588660 env[1301]: time="2025-11-01T00:44:28.588556155Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:44:28.589302 kubelet[2193]: E1101 00:44:28.589235 2193 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:44:28.589504 kubelet[2193]: E1101 00:44:28.589324 2193 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:44:28.589718 kubelet[2193]: E1101 00:44:28.589640 2193 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4fbpk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-699db95d94-pf9ws_calico-apiserver(4e92eb00-99ac-4f51-a076-ab8bc59ed374): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:28.591390 env[1301]: time="2025-11-01T00:44:28.590901269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:44:28.591762 kubelet[2193]: E1101 00:44:28.591705 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-699db95d94-pf9ws" podUID="4e92eb00-99ac-4f51-a076-ab8bc59ed374" Nov 1 00:44:28.851553 env[1301]: 
time="2025-11-01T00:44:28.851455903Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:28.853199 env[1301]: time="2025-11-01T00:44:28.853108927Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:44:28.853633 kubelet[2193]: E1101 00:44:28.853553 2193 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:44:28.854265 kubelet[2193]: E1101 00:44:28.853639 2193 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:44:28.854265 kubelet[2193]: E1101 00:44:28.853884 2193 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-phntk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-f99bc94f9-dqr4b_calico-system(34b32444-f031-47a6-89b0-97775432ade7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:28.855844 kubelet[2193]: E1101 00:44:28.855783 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f99bc94f9-dqr4b" podUID="34b32444-f031-47a6-89b0-97775432ade7" Nov 1 00:44:31.836439 systemd[1]: Started sshd@7-10.128.0.16:22-139.178.68.195:58662.service. 
Nov 1 00:44:31.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.128.0.16:22-139.178.68.195:58662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:31.864232 kernel: audit: type=1130 audit(1761957871.836:414): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.128.0.16:22-139.178.68.195:58662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:32.129000 audit[4873]: USER_ACCT pid=4873 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:32.160334 kernel: audit: type=1101 audit(1761957872.129:415): pid=4873 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:32.160979 sshd[4873]: Accepted publickey for core from 139.178.68.195 port 58662 ssh2: RSA SHA256:GSqF/4F3rRKdKeqeDHvdnEOSnHTK3+r0cz3SPwoprYw Nov 1 00:44:32.160856 sshd[4873]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:32.159000 audit[4873]: CRED_ACQ pid=4873 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:32.174948 systemd[1]: Started session-8.scope. Nov 1 00:44:32.176972 systemd-logind[1286]: New session 8 of user core. 
Nov 1 00:44:32.190488 kernel: audit: type=1103 audit(1761957872.159:416): pid=4873 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:32.160000 audit[4873]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc564a4f00 a2=3 a3=0 items=0 ppid=1 pid=4873 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:32.207289 kernel: audit: type=1006 audit(1761957872.160:417): pid=4873 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Nov 1 00:44:32.207367 kernel: audit: type=1300 audit(1761957872.160:417): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc564a4f00 a2=3 a3=0 items=0 ppid=1 pid=4873 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:32.160000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:44:32.235210 kernel: audit: type=1327 audit(1761957872.160:417): proctitle=737368643A20636F7265205B707269765D Nov 1 00:44:32.188000 audit[4873]: USER_START pid=4873 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:32.276437 kernel: audit: type=1105 audit(1761957872.188:418): pid=4873 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:32.276575 kernel: audit: type=1103 audit(1761957872.196:419): pid=4876 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:32.196000 audit[4876]: CRED_ACQ pid=4876 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:32.541074 sshd[4873]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:32.543000 audit[4873]: USER_END pid=4873 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:32.576512 kernel: audit: type=1106 audit(1761957872.543:420): pid=4873 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:32.543000 audit[4873]: CRED_DISP pid=4873 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:32.580614 systemd[1]: sshd@7-10.128.0.16:22-139.178.68.195:58662.service: Deactivated successfully. Nov 1 00:44:32.582617 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 00:44:32.587247 systemd-logind[1286]: Session 8 logged out. 
Waiting for processes to exit. Nov 1 00:44:32.589995 systemd-logind[1286]: Removed session 8. Nov 1 00:44:32.602271 kernel: audit: type=1104 audit(1761957872.543:421): pid=4873 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:32.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.128.0.16:22-139.178.68.195:58662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:36.393253 kubelet[2193]: E1101 00:44:36.393148 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-768cf9cc9d-2cqdw" podUID="bae1cc02-5d35-4e6c-8d44-6ad010de9d41" Nov 1 00:44:37.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.128.0.16:22-139.178.68.195:43306 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:44:37.585898 systemd[1]: Started sshd@8-10.128.0.16:22-139.178.68.195:43306.service. Nov 1 00:44:37.591911 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:44:37.592056 kernel: audit: type=1130 audit(1761957877.585:423): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.128.0.16:22-139.178.68.195:43306 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:37.887000 audit[4912]: USER_ACCT pid=4912 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:37.919108 sshd[4912]: Accepted publickey for core from 139.178.68.195 port 43306 ssh2: RSA SHA256:GSqF/4F3rRKdKeqeDHvdnEOSnHTK3+r0cz3SPwoprYw Nov 1 00:44:37.919774 kernel: audit: type=1101 audit(1761957877.887:424): pid=4912 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:37.920793 sshd[4912]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:37.919000 audit[4912]: CRED_ACQ pid=4912 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:37.931322 systemd[1]: Started session-9.scope. Nov 1 00:44:37.932501 systemd-logind[1286]: New session 9 of user core. 
Nov 1 00:44:37.952650 kernel: audit: type=1103 audit(1761957877.919:425): pid=4912 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:37.974217 kernel: audit: type=1006 audit(1761957877.919:426): pid=4912 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Nov 1 00:44:37.919000 audit[4912]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd87329510 a2=3 a3=0 items=0 ppid=1 pid=4912 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:37.919000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:44:38.012359 kernel: audit: type=1300 audit(1761957877.919:426): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd87329510 a2=3 a3=0 items=0 ppid=1 pid=4912 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:38.012586 kernel: audit: type=1327 audit(1761957877.919:426): proctitle=737368643A20636F7265205B707269765D Nov 1 00:44:38.012635 kernel: audit: type=1105 audit(1761957877.948:427): pid=4912 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:37.948000 audit[4912]: USER_START pid=4912 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:37.954000 audit[4915]: CRED_ACQ pid=4915 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:38.045378 kernel: audit: type=1103 audit(1761957877.954:428): pid=4915 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:38.268553 sshd[4912]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:38.270000 audit[4912]: USER_END pid=4912 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:38.292000 audit[4912]: CRED_DISP pid=4912 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:38.316677 systemd[1]: sshd@8-10.128.0.16:22-139.178.68.195:43306.service: Deactivated successfully. Nov 1 00:44:38.318852 systemd[1]: session-9.scope: Deactivated successfully. 
Nov 1 00:44:38.329123 kernel: audit: type=1106 audit(1761957878.270:429): pid=4912 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:38.329406 kernel: audit: type=1104 audit(1761957878.292:430): pid=4912 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:38.330803 systemd-logind[1286]: Session 9 logged out. Waiting for processes to exit. Nov 1 00:44:38.333319 systemd-logind[1286]: Removed session 9. Nov 1 00:44:38.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.128.0.16:22-139.178.68.195:43306 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:44:40.392360 kubelet[2193]: E1101 00:44:40.392299 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-699db95d94-lmk4w" podUID="74931096-7bc0-4134-a8b4-61ec9bf5e338" Nov 1 00:44:40.393531 kubelet[2193]: E1101 00:44:40.393487 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f99bc94f9-dqr4b" podUID="34b32444-f031-47a6-89b0-97775432ade7" Nov 1 00:44:40.395381 kubelet[2193]: E1101 00:44:40.395340 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xt7wl" podUID="bb5676df-eb26-4a3d-9a39-dc277ac29b28" Nov 1 00:44:41.395502 kubelet[2193]: E1101 
00:44:41.395430 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fvm9x" podUID="1db94968-800e-4bd7-88c1-2551a090e4ab" Nov 1 00:44:41.734935 update_engine[1287]: I1101 00:44:41.734750 1287 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Nov 1 00:44:41.734935 update_engine[1287]: I1101 00:44:41.734817 1287 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Nov 1 00:44:41.736294 update_engine[1287]: I1101 00:44:41.736238 1287 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Nov 1 00:44:41.737093 update_engine[1287]: I1101 00:44:41.737053 1287 omaha_request_params.cc:62] Current group set to lts Nov 1 00:44:41.737733 update_engine[1287]: I1101 00:44:41.737504 1287 update_attempter.cc:499] Already updated boot flags. Skipping. Nov 1 00:44:41.737733 update_engine[1287]: I1101 00:44:41.737525 1287 update_attempter.cc:643] Scheduling an action processor start. 
Nov 1 00:44:41.737733 update_engine[1287]: I1101 00:44:41.737548 1287 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 1 00:44:41.737733 update_engine[1287]: I1101 00:44:41.737599 1287 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Nov 1 00:44:41.737733 update_engine[1287]: I1101 00:44:41.737681 1287 omaha_request_action.cc:270] Posting an Omaha request to disabled Nov 1 00:44:41.737733 update_engine[1287]: I1101 00:44:41.737689 1287 omaha_request_action.cc:271] Request: Nov 1 00:44:41.737733 update_engine[1287]: Nov 1 00:44:41.737733 update_engine[1287]: Nov 1 00:44:41.737733 update_engine[1287]: Nov 1 00:44:41.737733 update_engine[1287]: Nov 1 00:44:41.737733 update_engine[1287]: Nov 1 00:44:41.737733 update_engine[1287]: Nov 1 00:44:41.737733 update_engine[1287]: Nov 1 00:44:41.737733 update_engine[1287]: Nov 1 00:44:41.737733 update_engine[1287]: I1101 00:44:41.737699 1287 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 1 00:44:41.739567 update_engine[1287]: I1101 00:44:41.739529 1287 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 1 00:44:41.739829 update_engine[1287]: I1101 00:44:41.739795 1287 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Nov 1 00:44:41.740244 locksmithd[1341]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Nov 1 00:44:41.751070 update_engine[1287]: E1101 00:44:41.751019 1287 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 1 00:44:41.751272 update_engine[1287]: I1101 00:44:41.751221 1287 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Nov 1 00:44:43.329219 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:44:43.329448 kernel: audit: type=1130 audit(1761957883.321:432): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.128.0.16:22-139.178.68.195:50022 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:43.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.128.0.16:22-139.178.68.195:50022 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:43.320917 systemd[1]: Started sshd@9-10.128.0.16:22-139.178.68.195:50022.service. 
Nov 1 00:44:43.395350 kubelet[2193]: E1101 00:44:43.393651 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-699db95d94-pf9ws" podUID="4e92eb00-99ac-4f51-a076-ab8bc59ed374" Nov 1 00:44:43.681426 kernel: audit: type=1101 audit(1761957883.649:433): pid=4927 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:43.649000 audit[4927]: USER_ACCT pid=4927 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:43.680818 sshd[4927]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:43.682482 sshd[4927]: Accepted publickey for core from 139.178.68.195 port 50022 ssh2: RSA SHA256:GSqF/4F3rRKdKeqeDHvdnEOSnHTK3+r0cz3SPwoprYw Nov 1 00:44:43.679000 audit[4927]: CRED_ACQ pid=4927 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:43.712340 kernel: audit: type=1103 audit(1761957883.679:434): pid=4927 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:43.712533 kernel: audit: type=1006 audit(1761957883.679:435): pid=4927 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Nov 1 00:44:43.710938 systemd[1]: Started session-10.scope. Nov 1 00:44:43.714590 systemd-logind[1286]: New session 10 of user core. Nov 1 00:44:43.679000 audit[4927]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff9f772550 a2=3 a3=0 items=0 ppid=1 pid=4927 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:43.679000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:44:43.778355 kernel: audit: type=1300 audit(1761957883.679:435): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff9f772550 a2=3 a3=0 items=0 ppid=1 pid=4927 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:43.778566 kernel: audit: type=1327 audit(1761957883.679:435): proctitle=737368643A20636F7265205B707269765D Nov 1 00:44:43.778612 kernel: audit: type=1105 audit(1761957883.733:436): pid=4927 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:43.733000 audit[4927]: USER_START pid=4927 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail 
acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:43.739000 audit[4930]: CRED_ACQ pid=4930 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:43.835215 kernel: audit: type=1103 audit(1761957883.739:437): pid=4930 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:44.003396 sshd[4927]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:44.006000 audit[4927]: USER_END pid=4927 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:44.040289 kernel: audit: type=1106 audit(1761957884.006:438): pid=4927 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:44.006000 audit[4927]: CRED_DISP pid=4927 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:44.045075 systemd[1]: sshd@9-10.128.0.16:22-139.178.68.195:50022.service: Deactivated successfully. Nov 1 00:44:44.047517 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 00:44:44.048574 systemd-logind[1286]: Session 10 logged out. 
Waiting for processes to exit. Nov 1 00:44:44.050624 systemd-logind[1286]: Removed session 10. Nov 1 00:44:44.071881 kernel: audit: type=1104 audit(1761957884.006:439): pid=4927 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:44.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.128.0.16:22-139.178.68.195:50022 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:44.073913 systemd[1]: Started sshd@10-10.128.0.16:22-139.178.68.195:50024.service. Nov 1 00:44:44.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.128.0.16:22-139.178.68.195:50024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:44.368000 audit[4940]: USER_ACCT pid=4940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:44.369537 sshd[4940]: Accepted publickey for core from 139.178.68.195 port 50024 ssh2: RSA SHA256:GSqF/4F3rRKdKeqeDHvdnEOSnHTK3+r0cz3SPwoprYw Nov 1 00:44:44.371000 audit[4940]: CRED_ACQ pid=4940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:44.371000 audit[4940]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdabc8a9d0 a2=3 a3=0 items=0 ppid=1 pid=4940 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" 
exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:44.371000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:44:44.372168 sshd[4940]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:44.379462 systemd-logind[1286]: New session 11 of user core. Nov 1 00:44:44.380500 systemd[1]: Started session-11.scope. Nov 1 00:44:44.393000 audit[4940]: USER_START pid=4940 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:44.398000 audit[4943]: CRED_ACQ pid=4943 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:44.729462 sshd[4940]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:44.732000 audit[4940]: USER_END pid=4940 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:44.732000 audit[4940]: CRED_DISP pid=4940 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:44.736894 systemd[1]: sshd@10-10.128.0.16:22-139.178.68.195:50024.service: Deactivated successfully. 
Nov 1 00:44:44.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.128.0.16:22-139.178.68.195:50024 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:44.740467 systemd-logind[1286]: Session 11 logged out. Waiting for processes to exit. Nov 1 00:44:44.740728 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 00:44:44.744589 systemd-logind[1286]: Removed session 11. Nov 1 00:44:44.774896 systemd[1]: Started sshd@11-10.128.0.16:22-139.178.68.195:50032.service. Nov 1 00:44:44.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.128.0.16:22-139.178.68.195:50032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:45.071000 audit[4951]: USER_ACCT pid=4951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:45.073315 sshd[4951]: Accepted publickey for core from 139.178.68.195 port 50032 ssh2: RSA SHA256:GSqF/4F3rRKdKeqeDHvdnEOSnHTK3+r0cz3SPwoprYw Nov 1 00:44:45.074000 audit[4951]: CRED_ACQ pid=4951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:45.074000 audit[4951]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffee2a2a130 a2=3 a3=0 items=0 ppid=1 pid=4951 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:45.074000 audit: PROCTITLE 
proctitle=737368643A20636F7265205B707269765D Nov 1 00:44:45.075323 sshd[4951]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:45.083851 systemd[1]: Started session-12.scope. Nov 1 00:44:45.084488 systemd-logind[1286]: New session 12 of user core. Nov 1 00:44:45.096000 audit[4951]: USER_START pid=4951 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:45.100000 audit[4954]: CRED_ACQ pid=4954 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:45.377134 sshd[4951]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:45.378000 audit[4951]: USER_END pid=4951 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:45.378000 audit[4951]: CRED_DISP pid=4951 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:45.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.128.0.16:22-139.178.68.195:50032 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:45.383269 systemd[1]: sshd@11-10.128.0.16:22-139.178.68.195:50032.service: Deactivated successfully. 
Nov 1 00:44:45.386893 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 00:44:45.388272 systemd-logind[1286]: Session 12 logged out. Waiting for processes to exit. Nov 1 00:44:45.394076 systemd-logind[1286]: Removed session 12. Nov 1 00:44:50.426207 systemd[1]: Started sshd@12-10.128.0.16:22-139.178.68.195:50046.service. Nov 1 00:44:50.452352 kernel: kauditd_printk_skb: 23 callbacks suppressed Nov 1 00:44:50.452422 kernel: audit: type=1130 audit(1761957890.425:459): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.128.0.16:22-139.178.68.195:50046 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:50.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.128.0.16:22-139.178.68.195:50046 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:50.734000 audit[4969]: USER_ACCT pid=4969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:50.766143 sshd[4969]: Accepted publickey for core from 139.178.68.195 port 50046 ssh2: RSA SHA256:GSqF/4F3rRKdKeqeDHvdnEOSnHTK3+r0cz3SPwoprYw Nov 1 00:44:50.766709 kernel: audit: type=1101 audit(1761957890.734:460): pid=4969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:50.765000 audit[4969]: CRED_ACQ pid=4969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" 
exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:50.768135 sshd[4969]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:50.778567 systemd[1]: Started session-13.scope. Nov 1 00:44:50.780864 systemd-logind[1286]: New session 13 of user core. Nov 1 00:44:50.794302 kernel: audit: type=1103 audit(1761957890.765:461): pid=4969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:50.813317 kernel: audit: type=1006 audit(1761957890.766:462): pid=4969 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Nov 1 00:44:50.766000 audit[4969]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe55aa9120 a2=3 a3=0 items=0 ppid=1 pid=4969 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:50.766000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:44:50.842343 kernel: audit: type=1300 audit(1761957890.766:462): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe55aa9120 a2=3 a3=0 items=0 ppid=1 pid=4969 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:50.842431 kernel: audit: type=1327 audit(1761957890.766:462): proctitle=737368643A20636F7265205B707269765D Nov 1 00:44:50.789000 audit[4969]: USER_START pid=4969 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 
addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:50.883324 kernel: audit: type=1105 audit(1761957890.789:463): pid=4969 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:50.883554 kernel: audit: type=1103 audit(1761957890.794:464): pid=4972 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:50.794000 audit[4972]: CRED_ACQ pid=4972 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:51.066514 sshd[4969]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:51.068000 audit[4969]: USER_END pid=4969 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:51.074000 audit[4969]: CRED_DISP pid=4969 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:51.105130 systemd[1]: sshd@12-10.128.0.16:22-139.178.68.195:50046.service: Deactivated successfully. Nov 1 00:44:51.106779 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 00:44:51.109698 systemd-logind[1286]: Session 13 logged out. Waiting for processes to exit. 
Nov 1 00:44:51.111835 systemd-logind[1286]: Removed session 13. Nov 1 00:44:51.127078 kernel: audit: type=1106 audit(1761957891.068:465): pid=4969 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:51.127316 kernel: audit: type=1104 audit(1761957891.074:466): pid=4969 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:51.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.128.0.16:22-139.178.68.195:50046 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:51.393617 env[1301]: time="2025-11-01T00:44:51.392681218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:44:51.589409 env[1301]: time="2025-11-01T00:44:51.589291482Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:51.591539 env[1301]: time="2025-11-01T00:44:51.591449514Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:44:51.591978 kubelet[2193]: E1101 00:44:51.591896 2193 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:44:51.592900 kubelet[2193]: E1101 00:44:51.592833 2193 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:44:51.593165 kubelet[2193]: E1101 00:44:51.593086 2193 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:082e0ea68660410580a68d5ee8e902f4,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fkph4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},S
tartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-768cf9cc9d-2cqdw_calico-system(bae1cc02-5d35-4e6c-8d44-6ad010de9d41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:51.597200 env[1301]: time="2025-11-01T00:44:51.596521580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:44:51.732402 update_engine[1287]: I1101 00:44:51.732239 1287 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 1 00:44:51.733055 update_engine[1287]: I1101 00:44:51.732590 1287 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 1 00:44:51.733055 update_engine[1287]: I1101 00:44:51.732864 1287 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 1 00:44:51.739354 update_engine[1287]: E1101 00:44:51.739293 1287 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 1 00:44:51.739531 update_engine[1287]: I1101 00:44:51.739452 1287 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Nov 1 00:44:51.800022 env[1301]: time="2025-11-01T00:44:51.799912326Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:51.802056 env[1301]: time="2025-11-01T00:44:51.801907930Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:44:51.802524 kubelet[2193]: E1101 00:44:51.802439 2193 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:44:51.802827 kubelet[2193]: E1101 00:44:51.802526 2193 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:44:51.802827 kubelet[2193]: E1101 00:44:51.802734 2193 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fkph4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifec
ycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-768cf9cc9d-2cqdw_calico-system(bae1cc02-5d35-4e6c-8d44-6ad010de9d41): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:51.807494 kubelet[2193]: E1101 00:44:51.804545 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-768cf9cc9d-2cqdw" podUID="bae1cc02-5d35-4e6c-8d44-6ad010de9d41" Nov 1 00:44:53.393386 env[1301]: time="2025-11-01T00:44:53.392898875Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:44:53.593607 env[1301]: time="2025-11-01T00:44:53.593515995Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:53.595479 env[1301]: time="2025-11-01T00:44:53.595393299Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:44:53.595824 kubelet[2193]: E1101 00:44:53.595758 2193 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:44:53.596469 kubelet[2193]: E1101 00:44:53.595845 2193 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:44:53.596469 kubelet[2193]: E1101 00:44:53.596125 2193 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-phntk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-f99bc94f9-dqr4b_calico-system(34b32444-f031-47a6-89b0-97775432ade7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:53.598011 kubelet[2193]: E1101 00:44:53.597963 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f99bc94f9-dqr4b" podUID="34b32444-f031-47a6-89b0-97775432ade7" Nov 1 00:44:54.392314 env[1301]: time="2025-11-01T00:44:54.391540867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:44:54.593012 env[1301]: 
time="2025-11-01T00:44:54.592914100Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:54.594779 env[1301]: time="2025-11-01T00:44:54.594690768Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:44:54.595110 kubelet[2193]: E1101 00:44:54.595048 2193 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:44:54.595273 kubelet[2193]: E1101 00:44:54.595135 2193 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:44:54.595443 kubelet[2193]: E1101 00:44:54.595373 2193 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2rjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fvm9x_calico-system(1db94968-800e-4bd7-88c1-2551a090e4ab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:54.599414 env[1301]: time="2025-11-01T00:44:54.598998981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:44:54.797404 env[1301]: time="2025-11-01T00:44:54.797322451Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:54.799391 env[1301]: time="2025-11-01T00:44:54.799305296Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:44:54.800025 kubelet[2193]: E1101 00:44:54.799933 2193 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:44:54.800655 kubelet[2193]: E1101 00:44:54.800047 2193 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:44:54.800655 kubelet[2193]: E1101 00:44:54.800289 2193 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 
--csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2rjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-fvm9x_calico-system(1db94968-800e-4bd7-88c1-2551a090e4ab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:54.802334 kubelet[2193]: E1101 00:44:54.802261 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fvm9x" podUID="1db94968-800e-4bd7-88c1-2551a090e4ab" Nov 1 00:44:55.392916 env[1301]: time="2025-11-01T00:44:55.392842785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:44:55.633637 env[1301]: time="2025-11-01T00:44:55.633513387Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:55.635456 env[1301]: time="2025-11-01T00:44:55.635360760Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:44:55.635908 kubelet[2193]: E1101 00:44:55.635845 2193 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:44:55.636143 kubelet[2193]: E1101 00:44:55.636107 2193 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:44:55.636742 env[1301]: time="2025-11-01T00:44:55.636699591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:44:55.637264 kubelet[2193]: E1101 00:44:55.637045 2193 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lb57m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-699db95d94-lmk4w_calico-apiserver(74931096-7bc0-4134-a8b4-61ec9bf5e338): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:55.638969 kubelet[2193]: E1101 00:44:55.638914 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-699db95d94-lmk4w" podUID="74931096-7bc0-4134-a8b4-61ec9bf5e338" Nov 1 00:44:55.852060 env[1301]: time="2025-11-01T00:44:55.851954343Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:55.853692 env[1301]: 
time="2025-11-01T00:44:55.853603733Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:44:55.854678 kubelet[2193]: E1101 00:44:55.854211 2193 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:44:55.854678 kubelet[2193]: E1101 00:44:55.854307 2193 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:44:55.854678 kubelet[2193]: E1101 00:44:55.854546 2193 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qm7px,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-xt7wl_calico-system(bb5676df-eb26-4a3d-9a39-dc277ac29b28): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:55.855860 kubelet[2193]: E1101 00:44:55.855795 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xt7wl" podUID="bb5676df-eb26-4a3d-9a39-dc277ac29b28" Nov 1 00:44:56.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.128.0.16:22-139.178.68.195:59500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:44:56.117531 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:44:56.117628 kernel: audit: type=1130 audit(1761957896.109:468): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.128.0.16:22-139.178.68.195:59500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.110356 systemd[1]: Started sshd@13-10.128.0.16:22-139.178.68.195:59500.service. Nov 1 00:44:56.439000 audit[4989]: USER_ACCT pid=4989 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:56.470557 sshd[4989]: Accepted publickey for core from 139.178.68.195 port 59500 ssh2: RSA SHA256:GSqF/4F3rRKdKeqeDHvdnEOSnHTK3+r0cz3SPwoprYw Nov 1 00:44:56.471660 kernel: audit: type=1101 audit(1761957896.439:469): pid=4989 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:56.471335 sshd[4989]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:56.469000 audit[4989]: CRED_ACQ pid=4989 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:56.486553 systemd[1]: Started session-14.scope. Nov 1 00:44:56.487947 systemd-logind[1286]: New session 14 of user core. 
Nov 1 00:44:56.502895 kernel: audit: type=1103 audit(1761957896.469:470): pid=4989 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:56.523259 kernel: audit: type=1006 audit(1761957896.469:471): pid=4989 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Nov 1 00:44:56.469000 audit[4989]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd8788cea0 a2=3 a3=0 items=0 ppid=1 pid=4989 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:56.469000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:44:56.560740 kernel: audit: type=1300 audit(1761957896.469:471): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd8788cea0 a2=3 a3=0 items=0 ppid=1 pid=4989 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:56.560903 kernel: audit: type=1327 audit(1761957896.469:471): proctitle=737368643A20636F7265205B707269765D Nov 1 00:44:56.560949 kernel: audit: type=1105 audit(1761957896.493:472): pid=4989 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:56.493000 audit[4989]: USER_START pid=4989 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:56.499000 audit[4991]: CRED_ACQ pid=4991 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:56.618073 kernel: audit: type=1103 audit(1761957896.499:473): pid=4991 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:56.768924 sshd[4989]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:56.770000 audit[4989]: USER_END pid=4989 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:56.804345 kernel: audit: type=1106 audit(1761957896.770:474): pid=4989 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:56.770000 audit[4989]: CRED_DISP pid=4989 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:56.806318 systemd[1]: sshd@13-10.128.0.16:22-139.178.68.195:59500.service: Deactivated successfully. Nov 1 00:44:56.808925 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 00:44:56.809748 systemd-logind[1286]: Session 14 logged out. 
Waiting for processes to exit. Nov 1 00:44:56.811926 systemd-logind[1286]: Removed session 14. Nov 1 00:44:56.829279 kernel: audit: type=1104 audit(1761957896.770:475): pid=4989 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:44:56.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.128.0.16:22-139.178.68.195:59500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:58.392717 env[1301]: time="2025-11-01T00:44:58.392187439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:44:58.567506 env[1301]: time="2025-11-01T00:44:58.567419983Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:58.569768 env[1301]: time="2025-11-01T00:44:58.569685641Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:44:58.570482 kubelet[2193]: E1101 00:44:58.570400 2193 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:44:58.571085 kubelet[2193]: E1101 00:44:58.570505 2193 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:44:58.571085 kubelet[2193]: E1101 00:44:58.570904 2193 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4fbpk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-699db95d94-pf9ws_calico-apiserver(4e92eb00-99ac-4f51-a076-ab8bc59ed374): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:58.572311 kubelet[2193]: E1101 00:44:58.572262 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-699db95d94-pf9ws" podUID="4e92eb00-99ac-4f51-a076-ab8bc59ed374" Nov 1 00:45:01.737041 update_engine[1287]: I1101 00:45:01.736258 1287 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 1 00:45:01.737041 update_engine[1287]: I1101 00:45:01.736694 1287 
libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 1 00:45:01.737041 update_engine[1287]: I1101 00:45:01.736976 1287 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 1 00:45:01.748071 update_engine[1287]: E1101 00:45:01.747844 1287 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 1 00:45:01.748071 update_engine[1287]: I1101 00:45:01.748027 1287 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Nov 1 00:45:01.817679 systemd[1]: Started sshd@14-10.128.0.16:22-139.178.68.195:59506.service. Nov 1 00:45:01.849535 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:45:01.849816 kernel: audit: type=1130 audit(1761957901.818:477): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.128.0.16:22-139.178.68.195:59506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:01.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.128.0.16:22-139.178.68.195:59506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:45:02.161000 audit[5005]: USER_ACCT pid=5005 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:02.193222 kernel: audit: type=1101 audit(1761957902.161:478): pid=5005 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:02.197328 sshd[5005]: Accepted publickey for core from 139.178.68.195 port 59506 ssh2: RSA SHA256:GSqF/4F3rRKdKeqeDHvdnEOSnHTK3+r0cz3SPwoprYw Nov 1 00:45:02.198640 sshd[5005]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:45:02.212288 systemd-logind[1286]: New session 15 of user core. Nov 1 00:45:02.214075 systemd[1]: Started session-15.scope. 
Nov 1 00:45:02.196000 audit[5005]: CRED_ACQ pid=5005 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:02.259450 kernel: audit: type=1103 audit(1761957902.196:479): pid=5005 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:02.307210 kernel: audit: type=1006 audit(1761957902.196:480): pid=5005 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Nov 1 00:45:02.196000 audit[5005]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc7b8738f0 a2=3 a3=0 items=0 ppid=1 pid=5005 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:02.196000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:45:02.356986 kernel: audit: type=1300 audit(1761957902.196:480): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc7b8738f0 a2=3 a3=0 items=0 ppid=1 pid=5005 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:02.357245 kernel: audit: type=1327 audit(1761957902.196:480): proctitle=737368643A20636F7265205B707269765D Nov 1 00:45:02.228000 audit[5005]: USER_START pid=5005 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:02.232000 
audit[5008]: CRED_ACQ pid=5008 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:02.430010 kernel: audit: type=1105 audit(1761957902.228:481): pid=5005 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:02.430301 kernel: audit: type=1103 audit(1761957902.232:482): pid=5008 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:02.647868 sshd[5005]: pam_unix(sshd:session): session closed for user core Nov 1 00:45:02.648000 audit[5005]: USER_END pid=5005 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:02.677300 systemd[1]: sshd@14-10.128.0.16:22-139.178.68.195:59506.service: Deactivated successfully. Nov 1 00:45:02.680105 systemd-logind[1286]: Session 15 logged out. Waiting for processes to exit. Nov 1 00:45:02.681593 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 00:45:02.683747 systemd-logind[1286]: Removed session 15. 
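The `PROCTITLE proctitle=737368...` records in the audit trail above are hex-encoded command lines, with NUL bytes separating argv elements. A minimal decoding sketch (the helper name is ours, not part of any audit tool):

```python
# Decode an audit PROCTITLE value: a hex-encoded command line in
# which NUL bytes separate the argv elements.
def decode_proctitle(hex_value: str) -> list[str]:
    raw = bytes.fromhex(hex_value)
    return raw.decode("utf-8", errors="replace").split("\x00")

# The sshd session records above all carry this proctitle:
print(decode_proctitle("737368643A20636F7265205B707269765D"))
# → ['sshd: core [priv]']
```

That is the privileged sshd monitor process for the `core` login, consistent with the surrounding `Accepted publickey for core` and PAM session records.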
Nov 1 00:45:02.685214 kernel: audit: type=1106 audit(1761957902.648:483): pid=5005 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:02.648000 audit[5005]: CRED_DISP pid=5005 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:02.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.128.0.16:22-139.178.68.195:59506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:02.717312 kernel: audit: type=1104 audit(1761957902.648:484): pid=5005 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:03.408748 kubelet[2193]: E1101 00:45:03.408653 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-768cf9cc9d-2cqdw" podUID="bae1cc02-5d35-4e6c-8d44-6ad010de9d41" Nov 1 00:45:05.397495 kubelet[2193]: E1101 00:45:05.397431 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f99bc94f9-dqr4b" podUID="34b32444-f031-47a6-89b0-97775432ade7" Nov 1 00:45:06.882999 systemd[1]: run-containerd-runc-k8s.io-2af32c85892ea59a0f32d6a0e075814b8e2f843effb3d9a803bf09d2a2ad08ef-runc.jzQ0wg.mount: Deactivated successfully. 
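The kubelet `ImagePullBackOff` errors above each embed the unresolvable image references several times inside one escaped message. A small sketch for pulling the distinct refs out of such a message for triage (the regex and function name are illustrative assumptions, tuned to the `ghcr.io` refs seen in this log):

```python
import re

# Illustrative: extract the distinct image references a kubelet
# "Error syncing pod" message complains about. The pattern assumes
# ghcr.io refs with a vX.Y.Z tag, as in the log records above.
IMAGE_RE = re.compile(r'ghcr\.io/[\w./-]+:v[\d.]+')

def failing_images(msg: str) -> list[str]:
    """Return distinct image refs, in first-seen order."""
    return list(dict.fromkeys(IMAGE_RE.findall(msg)))

msg = ('Back-off pulling image \\"ghcr.io/flatcar/calico/whisker:v3.30.4\\": '
       'ErrImagePull: ghcr.io/flatcar/calico/whisker:v3.30.4: not found')
print(failing_images(msg))
# → ['ghcr.io/flatcar/calico/whisker:v3.30.4']
```

Deduplicating first-seen order matters here because each message repeats the same ref once per nesting level of the wrapped error.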
Nov 1 00:45:07.402203 kubelet[2193]: E1101 00:45:07.394436 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fvm9x" podUID="1db94968-800e-4bd7-88c1-2551a090e4ab" Nov 1 00:45:07.699055 systemd[1]: Started sshd@15-10.128.0.16:22-139.178.68.195:35752.service. Nov 1 00:45:07.716399 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:45:07.716625 kernel: audit: type=1130 audit(1761957907.698:486): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.128.0.16:22-139.178.68.195:35752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:07.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.128.0.16:22-139.178.68.195:35752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:45:08.051000 audit[5038]: USER_ACCT pid=5038 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:08.083288 kernel: audit: type=1101 audit(1761957908.051:487): pid=5038 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:08.087206 sshd[5038]: Accepted publickey for core from 139.178.68.195 port 35752 ssh2: RSA SHA256:GSqF/4F3rRKdKeqeDHvdnEOSnHTK3+r0cz3SPwoprYw Nov 1 00:45:08.088409 sshd[5038]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:45:08.086000 audit[5038]: CRED_ACQ pid=5038 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:08.123305 kernel: audit: type=1103 audit(1761957908.086:488): pid=5038 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:08.130149 systemd[1]: Started session-16.scope. Nov 1 00:45:08.131825 systemd-logind[1286]: New session 16 of user core. 
Nov 1 00:45:08.147431 kernel: audit: type=1006 audit(1761957908.086:489): pid=5038 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Nov 1 00:45:08.149239 kernel: audit: type=1300 audit(1761957908.086:489): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc2c8bdc10 a2=3 a3=0 items=0 ppid=1 pid=5038 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:08.086000 audit[5038]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc2c8bdc10 a2=3 a3=0 items=0 ppid=1 pid=5038 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:08.086000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:45:08.222205 kernel: audit: type=1327 audit(1761957908.086:489): proctitle=737368643A20636F7265205B707269765D Nov 1 00:45:08.160000 audit[5038]: USER_START pid=5038 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:08.262222 kernel: audit: type=1105 audit(1761957908.160:490): pid=5038 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:08.262464 kernel: audit: type=1103 audit(1761957908.160:491): pid=5041 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" 
hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:08.160000 audit[5041]: CRED_ACQ pid=5041 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:08.392055 kubelet[2193]: E1101 00:45:08.391858 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-699db95d94-lmk4w" podUID="74931096-7bc0-4134-a8b4-61ec9bf5e338" Nov 1 00:45:08.520535 sshd[5038]: pam_unix(sshd:session): session closed for user core Nov 1 00:45:08.521000 audit[5038]: USER_END pid=5038 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:08.527592 systemd-logind[1286]: Session 16 logged out. Waiting for processes to exit. Nov 1 00:45:08.530671 systemd[1]: sshd@15-10.128.0.16:22-139.178.68.195:35752.service: Deactivated successfully. Nov 1 00:45:08.532339 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 00:45:08.534697 systemd-logind[1286]: Removed session 16. 
Nov 1 00:45:08.555213 kernel: audit: type=1106 audit(1761957908.521:492): pid=5038 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:08.571374 systemd[1]: Started sshd@16-10.128.0.16:22-139.178.68.195:35766.service. Nov 1 00:45:08.521000 audit[5038]: CRED_DISP pid=5038 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:08.611539 kernel: audit: type=1104 audit(1761957908.521:493): pid=5038 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:08.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.128.0.16:22-139.178.68.195:35752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:08.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.128.0.16:22-139.178.68.195:35766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:45:08.908000 audit[5051]: USER_ACCT pid=5051 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:08.911818 sshd[5051]: Accepted publickey for core from 139.178.68.195 port 35766 ssh2: RSA SHA256:GSqF/4F3rRKdKeqeDHvdnEOSnHTK3+r0cz3SPwoprYw Nov 1 00:45:08.911000 audit[5051]: CRED_ACQ pid=5051 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:08.911000 audit[5051]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff758dca50 a2=3 a3=0 items=0 ppid=1 pid=5051 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:08.911000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:45:08.913076 sshd[5051]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:45:08.927900 systemd-logind[1286]: New session 17 of user core. Nov 1 00:45:08.929265 systemd[1]: Started session-17.scope. 
Nov 1 00:45:08.942000 audit[5051]: USER_START pid=5051 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:08.945000 audit[5054]: CRED_ACQ pid=5054 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:09.493536 sshd[5051]: pam_unix(sshd:session): session closed for user core Nov 1 00:45:09.494000 audit[5051]: USER_END pid=5051 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:09.495000 audit[5051]: CRED_DISP pid=5051 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:09.500314 systemd-logind[1286]: Session 17 logged out. Waiting for processes to exit. Nov 1 00:45:09.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.128.0.16:22-139.178.68.195:35766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:09.503281 systemd[1]: sshd@16-10.128.0.16:22-139.178.68.195:35766.service: Deactivated successfully. Nov 1 00:45:09.504892 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 00:45:09.508577 systemd-logind[1286]: Removed session 17. 
Nov 1 00:45:09.541063 systemd[1]: Started sshd@17-10.128.0.16:22-139.178.68.195:35772.service. Nov 1 00:45:09.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.128.0.16:22-139.178.68.195:35772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:09.869000 audit[5062]: USER_ACCT pid=5062 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:09.872423 sshd[5062]: Accepted publickey for core from 139.178.68.195 port 35772 ssh2: RSA SHA256:GSqF/4F3rRKdKeqeDHvdnEOSnHTK3+r0cz3SPwoprYw Nov 1 00:45:09.871000 audit[5062]: CRED_ACQ pid=5062 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:09.871000 audit[5062]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd5ea34aa0 a2=3 a3=0 items=0 ppid=1 pid=5062 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:09.871000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:45:09.873601 sshd[5062]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:45:09.891886 systemd-logind[1286]: New session 18 of user core. Nov 1 00:45:09.894294 systemd[1]: Started session-18.scope. 
Nov 1 00:45:09.909000 audit[5062]: USER_START pid=5062 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:09.912000 audit[5065]: CRED_ACQ pid=5065 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:10.392803 kubelet[2193]: E1101 00:45:10.392734 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-699db95d94-pf9ws" podUID="4e92eb00-99ac-4f51-a076-ab8bc59ed374" Nov 1 00:45:10.393925 kubelet[2193]: E1101 00:45:10.392734 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xt7wl" podUID="bb5676df-eb26-4a3d-9a39-dc277ac29b28" Nov 1 00:45:11.148589 sshd[5062]: pam_unix(sshd:session): session closed for user core Nov 1 00:45:11.151000 audit[5062]: USER_END 
pid=5062 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:11.151000 audit[5062]: CRED_DISP pid=5062 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:11.157010 systemd-logind[1286]: Session 18 logged out. Waiting for processes to exit. Nov 1 00:45:11.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.128.0.16:22-139.178.68.195:35772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:11.160925 systemd[1]: sshd@17-10.128.0.16:22-139.178.68.195:35772.service: Deactivated successfully. Nov 1 00:45:11.162510 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 00:45:11.168366 systemd-logind[1286]: Removed session 18. Nov 1 00:45:11.193837 systemd[1]: Started sshd@18-10.128.0.16:22-139.178.68.195:35786.service. Nov 1 00:45:11.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.128.0.16:22-139.178.68.195:35786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:45:11.213000 audit[5078]: NETFILTER_CFG table=filter:127 family=2 entries=14 op=nft_register_rule pid=5078 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:45:11.213000 audit[5078]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe4bb88990 a2=0 a3=7ffe4bb8897c items=0 ppid=2355 pid=5078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:11.213000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:45:11.227000 audit[5078]: NETFILTER_CFG table=nat:128 family=2 entries=20 op=nft_register_rule pid=5078 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:45:11.227000 audit[5078]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe4bb88990 a2=0 a3=7ffe4bb8897c items=0 ppid=2355 pid=5078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:11.227000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:45:11.250000 audit[5081]: NETFILTER_CFG table=filter:129 family=2 entries=26 op=nft_register_rule pid=5081 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:45:11.250000 audit[5081]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffeef5bc7f0 a2=0 a3=7ffeef5bc7dc items=0 ppid=2355 pid=5081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:11.250000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:45:11.257000 audit[5081]: NETFILTER_CFG table=nat:130 family=2 entries=20 op=nft_register_rule pid=5081 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:45:11.257000 audit[5081]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffeef5bc7f0 a2=0 a3=0 items=0 ppid=2355 pid=5081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:11.257000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:45:11.515000 audit[5077]: USER_ACCT pid=5077 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:11.518247 sshd[5077]: Accepted publickey for core from 139.178.68.195 port 35786 ssh2: RSA SHA256:GSqF/4F3rRKdKeqeDHvdnEOSnHTK3+r0cz3SPwoprYw Nov 1 00:45:11.517000 audit[5077]: CRED_ACQ pid=5077 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:11.519000 audit[5077]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffce7d292f0 a2=3 a3=0 items=0 ppid=1 pid=5077 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:11.519000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:45:11.520992 sshd[5077]: pam_unix(sshd:session): session 
opened for user core(uid=500) by (uid=0) Nov 1 00:45:11.535522 systemd[1]: Started session-19.scope. Nov 1 00:45:11.536263 systemd-logind[1286]: New session 19 of user core. Nov 1 00:45:11.555000 audit[5077]: USER_START pid=5077 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:11.559000 audit[5083]: CRED_ACQ pid=5083 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:11.732822 update_engine[1287]: I1101 00:45:11.731952 1287 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 1 00:45:11.732822 update_engine[1287]: I1101 00:45:11.732448 1287 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 1 00:45:11.732822 update_engine[1287]: I1101 00:45:11.732749 1287 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 1 00:45:11.771832 update_engine[1287]: E1101 00:45:11.770292 1287 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 1 00:45:11.771832 update_engine[1287]: I1101 00:45:11.770502 1287 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Nov 1 00:45:11.771832 update_engine[1287]: I1101 00:45:11.770522 1287 omaha_request_action.cc:621] Omaha request response: Nov 1 00:45:11.771832 update_engine[1287]: E1101 00:45:11.770706 1287 omaha_request_action.cc:640] Omaha request network transfer failed. Nov 1 00:45:11.771832 update_engine[1287]: I1101 00:45:11.770735 1287 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
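The `NETFILTER_CFG`/`PROCTITLE` records further up log the `iptables-restore` invocation only as a hex proctitle. Decoding it (a one-off sketch, same NUL-separated encoding as the sshd proctitles) recovers the full argv:

```python
# Hex proctitle copied from the NETFILTER_CFG audit records above;
# argv elements are separated by NUL bytes.
raw = bytes.fromhex(
    "69707461626C65732D726573746F7265002D770035002D5700"
    "313030303030002D2D6E6F666C757368002D2D636F756E74657273"
)
argv = raw.decode().split("\x00")
print(argv)
# → ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']
```

`-w 5` / `-W 100000` are the xtables-lock wait options (seconds / microseconds per retry), and `--noflush --counters` means each restore amends the existing ruleset instead of replacing it — the usual kube-proxy style of periodic rule sync seen here.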
Nov 1 00:45:11.771832 update_engine[1287]: I1101 00:45:11.770745 1287 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 1 00:45:11.771832 update_engine[1287]: I1101 00:45:11.770754 1287 update_attempter.cc:306] Processing Done. Nov 1 00:45:11.771832 update_engine[1287]: E1101 00:45:11.770778 1287 update_attempter.cc:619] Update failed. Nov 1 00:45:11.771832 update_engine[1287]: I1101 00:45:11.770789 1287 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Nov 1 00:45:11.771832 update_engine[1287]: I1101 00:45:11.770799 1287 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Nov 1 00:45:11.771832 update_engine[1287]: I1101 00:45:11.770810 1287 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Nov 1 00:45:11.771832 update_engine[1287]: I1101 00:45:11.770953 1287 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 1 00:45:11.771832 update_engine[1287]: I1101 00:45:11.770998 1287 omaha_request_action.cc:270] Posting an Omaha request to disabled Nov 1 00:45:11.771832 update_engine[1287]: I1101 00:45:11.771008 1287 omaha_request_action.cc:271] Request: Nov 1 00:45:11.771832 update_engine[1287]: Nov 1 00:45:11.771832 update_engine[1287]: Nov 1 00:45:11.772951 update_engine[1287]: Nov 1 00:45:11.772951 update_engine[1287]: Nov 1 00:45:11.772951 update_engine[1287]: Nov 1 00:45:11.772951 update_engine[1287]: Nov 1 00:45:11.772951 update_engine[1287]: I1101 00:45:11.771019 1287 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 1 00:45:11.772951 update_engine[1287]: I1101 00:45:11.771371 1287 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 1 00:45:11.772951 update_engine[1287]: I1101 00:45:11.771667 1287 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Nov 1 00:45:11.778248 locksmithd[1341]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Nov 1 00:45:11.780195 update_engine[1287]: E1101 00:45:11.779776 1287 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Nov 1 00:45:11.780195 update_engine[1287]: I1101 00:45:11.779958 1287 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Nov 1 00:45:11.780195 update_engine[1287]: I1101 00:45:11.779973 1287 omaha_request_action.cc:621] Omaha request response:
Nov 1 00:45:11.780195 update_engine[1287]: I1101 00:45:11.779986 1287 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Nov 1 00:45:11.780195 update_engine[1287]: I1101 00:45:11.779995 1287 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Nov 1 00:45:11.780195 update_engine[1287]: I1101 00:45:11.780004 1287 update_attempter.cc:306] Processing Done.
Nov 1 00:45:11.780195 update_engine[1287]: I1101 00:45:11.780016 1287 update_attempter.cc:310] Error event sent.
Nov 1 00:45:11.780195 update_engine[1287]: I1101 00:45:11.780032 1287 update_check_scheduler.cc:74] Next update check in 44m1s
Nov 1 00:45:11.781304 locksmithd[1341]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Nov 1 00:45:12.171552 sshd[5077]: pam_unix(sshd:session): session closed for user core
Nov 1 00:45:12.173000 audit[5077]: USER_END pid=5077 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:45:12.174000 audit[5077]: CRED_DISP pid=5077 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:45:12.180259 systemd-logind[1286]: Session 19 logged out. Waiting for processes to exit.
Nov 1 00:45:12.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.128.0.16:22-139.178.68.195:35786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:12.183772 systemd[1]: sshd@18-10.128.0.16:22-139.178.68.195:35786.service: Deactivated successfully.
Nov 1 00:45:12.185656 systemd[1]: session-19.scope: Deactivated successfully.
Nov 1 00:45:12.193463 systemd-logind[1286]: Removed session 19.
Nov 1 00:45:12.218531 systemd[1]: Started sshd@19-10.128.0.16:22-139.178.68.195:35790.service.
Nov 1 00:45:12.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.128.0.16:22-139.178.68.195:35790 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:12.539000 audit[5091]: USER_ACCT pid=5091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:45:12.542585 sshd[5091]: Accepted publickey for core from 139.178.68.195 port 35790 ssh2: RSA SHA256:GSqF/4F3rRKdKeqeDHvdnEOSnHTK3+r0cz3SPwoprYw
Nov 1 00:45:12.541000 audit[5091]: CRED_ACQ pid=5091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:45:12.542000 audit[5091]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffca50ff190 a2=3 a3=0 items=0 ppid=1 pid=5091 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:45:12.542000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Nov 1 00:45:12.543789 sshd[5091]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:45:12.558513 systemd-logind[1286]: New session 20 of user core.
Nov 1 00:45:12.560130 systemd[1]: Started session-20.scope.
Nov 1 00:45:12.569000 audit[5091]: USER_START pid=5091 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:45:12.573000 audit[5094]: CRED_ACQ pid=5094 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:45:12.930521 sshd[5091]: pam_unix(sshd:session): session closed for user core
Nov 1 00:45:12.940260 kernel: kauditd_printk_skb: 54 callbacks suppressed
Nov 1 00:45:12.940446 kernel: audit: type=1106 audit(1761957912.931:532): pid=5091 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:45:12.931000 audit[5091]: USER_END pid=5091 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:45:12.935949 systemd[1]: sshd@19-10.128.0.16:22-139.178.68.195:35790.service: Deactivated successfully.
Nov 1 00:45:12.938278 systemd[1]: session-20.scope: Deactivated successfully.
Nov 1 00:45:12.972187 systemd-logind[1286]: Session 20 logged out. Waiting for processes to exit.
Nov 1 00:45:12.974838 systemd-logind[1286]: Removed session 20.
Nov 1 00:45:12.931000 audit[5091]: CRED_DISP pid=5091 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:45:13.031733 kernel: audit: type=1104 audit(1761957912.931:533): pid=5091 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:45:13.031974 kernel: audit: type=1131 audit(1761957912.931:534): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.128.0.16:22-139.178.68.195:35790 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:12.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.128.0.16:22-139.178.68.195:35790 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:13.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.128.0.16:22-193.46.255.159:64122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:13.218329 systemd[1]: Started sshd@20-10.128.0.16:22-193.46.255.159:64122.service.
Nov 1 00:45:13.244484 kernel: audit: type=1130 audit(1761957913.217:535): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.128.0.16:22-193.46.255.159:64122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:14.202000 audit[5104]: USER_AUTH pid=5104 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=193.46.255.159 addr=193.46.255.159 terminal=ssh res=failed'
Nov 1 00:45:14.229233 kernel: audit: type=1100 audit(1761957914.202:536): pid=5104 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=193.46.255.159 addr=193.46.255.159 terminal=ssh res=failed'
Nov 1 00:45:14.229314 sshd[5104]: Failed password for root from 193.46.255.159 port 64122 ssh2
Nov 1 00:45:14.352000 audit[5104]: USER_AUTH pid=5104 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=193.46.255.159 addr=193.46.255.159 terminal=ssh res=failed'
Nov 1 00:45:14.378421 kernel: audit: type=1100 audit(1761957914.352:537): pid=5104 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=193.46.255.159 addr=193.46.255.159 terminal=ssh res=failed'
Nov 1 00:45:14.378524 sshd[5104]: Failed password for root from 193.46.255.159 port 64122 ssh2
Nov 1 00:45:14.502000 audit[5104]: USER_AUTH pid=5104 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=193.46.255.159 addr=193.46.255.159 terminal=ssh res=failed'
Nov 1 00:45:14.528621 sshd[5104]: Failed password for root from 193.46.255.159 port 64122 ssh2
Nov 1 00:45:14.529213 kernel: audit: type=1100 audit(1761957914.502:538): pid=5104 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=193.46.255.159 addr=193.46.255.159 terminal=ssh res=failed'
Nov 1 00:45:14.695839 sshd[5104]: Received disconnect from 193.46.255.159 port 64122:11: [preauth]
Nov 1 00:45:14.696107 sshd[5104]: Disconnected from authenticating user root 193.46.255.159 port 64122 [preauth]
Nov 1 00:45:14.698376 systemd[1]: sshd@20-10.128.0.16:22-193.46.255.159:64122.service: Deactivated successfully.
Nov 1 00:45:14.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.128.0.16:22-193.46.255.159:64122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:14.725217 kernel: audit: type=1131 audit(1761957914.697:539): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.128.0.16:22-193.46.255.159:64122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:14.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.128.0.16:22-193.46.255.159:64136 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:14.854489 systemd[1]: Started sshd@21-10.128.0.16:22-193.46.255.159:64136.service.
Nov 1 00:45:14.880411 kernel: audit: type=1130 audit(1761957914.854:540): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.128.0.16:22-193.46.255.159:64136 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:15.855000 audit[5108]: USER_AUTH pid=5108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=193.46.255.159 addr=193.46.255.159 terminal=ssh res=failed'
Nov 1 00:45:15.876632 sshd[5108]: Failed password for root from 193.46.255.159 port 64136 ssh2
Nov 1 00:45:15.882265 kernel: audit: type=1100 audit(1761957915.855:541): pid=5108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=193.46.255.159 addr=193.46.255.159 terminal=ssh res=failed'
Nov 1 00:45:16.009000 audit[5108]: USER_AUTH pid=5108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=193.46.255.159 addr=193.46.255.159 terminal=ssh res=failed'
Nov 1 00:45:16.011335 sshd[5108]: Failed password for root from 193.46.255.159 port 64136 ssh2
Nov 1 00:45:16.163000 audit[5108]: USER_AUTH pid=5108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=193.46.255.159 addr=193.46.255.159 terminal=ssh res=failed'
Nov 1 00:45:16.165432 sshd[5108]: Failed password for root from 193.46.255.159 port 64136 ssh2
Nov 1 00:45:16.315901 sshd[5108]: Received disconnect from 193.46.255.159 port 64136:11: [preauth]
Nov 1 00:45:16.316249 sshd[5108]: Disconnected from authenticating user root 193.46.255.159 port 64136 [preauth]
Nov 1 00:45:16.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.128.0.16:22-193.46.255.159:64136 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:16.318798 systemd[1]: sshd@21-10.128.0.16:22-193.46.255.159:64136.service: Deactivated successfully.
Nov 1 00:45:16.394251 kubelet[2193]: E1101 00:45:16.394160 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-768cf9cc9d-2cqdw" podUID="bae1cc02-5d35-4e6c-8d44-6ad010de9d41"
Nov 1 00:45:16.470812 systemd[1]: Started sshd@22-10.128.0.16:22-193.46.255.159:64144.service.
Nov 1 00:45:16.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.128.0.16:22-193.46.255.159:64144 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:17.490000 audit[5112]: USER_AUTH pid=5112 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=193.46.255.159 addr=193.46.255.159 terminal=ssh res=failed'
Nov 1 00:45:17.491531 sshd[5112]: Failed password for root from 193.46.255.159 port 64144 ssh2
Nov 1 00:45:17.644000 audit[5112]: USER_AUTH pid=5112 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=193.46.255.159 addr=193.46.255.159 terminal=ssh res=failed'
Nov 1 00:45:17.645372 sshd[5112]: Failed password for root from 193.46.255.159 port 64144 ssh2
Nov 1 00:45:17.798980 sshd[5112]: Failed password for root from 193.46.255.159 port 64144 ssh2
Nov 1 00:45:17.798000 audit[5112]: USER_AUTH pid=5112 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:authentication grantors=? acct="root" exe="/usr/sbin/sshd" hostname=193.46.255.159 addr=193.46.255.159 terminal=ssh res=failed'
Nov 1 00:45:17.847000 audit[5118]: NETFILTER_CFG table=filter:131 family=2 entries=26 op=nft_register_rule pid=5118 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Nov 1 00:45:17.847000 audit[5118]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc52048430 a2=0 a3=7ffc5204841c items=0 ppid=2355 pid=5118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:45:17.847000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Nov 1 00:45:17.855000 audit[5118]: NETFILTER_CFG table=nat:132 family=2 entries=104 op=nft_register_chain pid=5118 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Nov 1 00:45:17.855000 audit[5118]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc52048430 a2=0 a3=7ffc5204841c items=0 ppid=2355 pid=5118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:45:17.855000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Nov 1 00:45:17.951200 sshd[5112]: Received disconnect from 193.46.255.159 port 64144:11: [preauth]
Nov 1 00:45:17.951200 sshd[5112]: Disconnected from authenticating user root 193.46.255.159 port 64144 [preauth]
Nov 1 00:45:17.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.128.0.16:22-193.46.255.159:64144 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:17.953250 systemd[1]: sshd@22-10.128.0.16:22-193.46.255.159:64144.service: Deactivated successfully.
Nov 1 00:45:17.960228 kernel: kauditd_printk_skb: 13 callbacks suppressed
Nov 1 00:45:17.960360 kernel: audit: type=1131 audit(1761957917.953:551): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.128.0.16:22-193.46.255.159:64144 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:17.985533 systemd[1]: Started sshd@23-10.128.0.16:22-139.178.68.195:58544.service.
Nov 1 00:45:17.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.128.0.16:22-139.178.68.195:58544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:18.022238 kernel: audit: type=1130 audit(1761957917.985:552): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.128.0.16:22-139.178.68.195:58544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:18.315000 audit[5121]: USER_ACCT pid=5121 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:45:18.345914 sshd[5121]: Accepted publickey for core from 139.178.68.195 port 58544 ssh2: RSA SHA256:GSqF/4F3rRKdKeqeDHvdnEOSnHTK3+r0cz3SPwoprYw
Nov 1 00:45:18.346522 kernel: audit: type=1101 audit(1761957918.315:553): pid=5121 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:45:18.350048 sshd[5121]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:45:18.348000 audit[5121]: CRED_ACQ pid=5121 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:45:18.377575 kernel: audit: type=1103 audit(1761957918.348:554): pid=5121 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:45:18.377218 systemd[1]: Started session-21.scope.
Nov 1 00:45:18.377984 systemd-logind[1286]: New session 21 of user core.
Nov 1 00:45:18.403238 kernel: audit: type=1006 audit(1761957918.349:555): pid=5121 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1
Nov 1 00:45:18.349000 audit[5121]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc128a61e0 a2=3 a3=0 items=0 ppid=1 pid=5121 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:45:18.433311 kernel: audit: type=1300 audit(1761957918.349:555): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc128a61e0 a2=3 a3=0 items=0 ppid=1 pid=5121 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:45:18.349000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Nov 1 00:45:18.433000 audit[5121]: USER_START pid=5121 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:45:18.476212 kernel: audit: type=1327 audit(1761957918.349:555): proctitle=737368643A20636F7265205B707269765D
Nov 1 00:45:18.476481 kernel: audit: type=1105 audit(1761957918.433:556): pid=5121 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:45:18.443000 audit[5125]: CRED_ACQ pid=5125 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:45:18.500966 kernel: audit: type=1103 audit(1761957918.443:557): pid=5125 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:45:18.810536 sshd[5121]: pam_unix(sshd:session): session closed for user core
Nov 1 00:45:18.812000 audit[5121]: USER_END pid=5121 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:45:18.816981 systemd-logind[1286]: Session 21 logged out. Waiting for processes to exit.
Nov 1 00:45:18.819601 systemd[1]: sshd@23-10.128.0.16:22-139.178.68.195:58544.service: Deactivated successfully.
Nov 1 00:45:18.821156 systemd[1]: session-21.scope: Deactivated successfully.
Nov 1 00:45:18.823142 systemd-logind[1286]: Removed session 21.
Nov 1 00:45:18.812000 audit[5121]: CRED_DISP pid=5121 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:45:18.846327 kernel: audit: type=1106 audit(1761957918.812:558): pid=5121 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:45:18.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.128.0.16:22-139.178.68.195:58544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:20.391955 kubelet[2193]: E1101 00:45:20.391886 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f99bc94f9-dqr4b" podUID="34b32444-f031-47a6-89b0-97775432ade7"
Nov 1 00:45:20.393540 kubelet[2193]: E1101 00:45:20.393463 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fvm9x" podUID="1db94968-800e-4bd7-88c1-2551a090e4ab"
Nov 1 00:45:21.392724 kubelet[2193]: E1101 00:45:21.392490 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-699db95d94-pf9ws" podUID="4e92eb00-99ac-4f51-a076-ab8bc59ed374"
Nov 1 00:45:21.393570 kubelet[2193]: E1101 00:45:21.393088 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-699db95d94-lmk4w" podUID="74931096-7bc0-4134-a8b4-61ec9bf5e338"
Nov 1 00:45:23.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.128.0.16:22-139.178.68.195:56184 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:23.856797 systemd[1]: Started sshd@24-10.128.0.16:22-139.178.68.195:56184.service.
Nov 1 00:45:23.862602 kernel: kauditd_printk_skb: 2 callbacks suppressed
Nov 1 00:45:23.862731 kernel: audit: type=1130 audit(1761957923.856:561): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.128.0.16:22-139.178.68.195:56184 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:24.173000 audit[5134]: USER_ACCT pid=5134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:45:24.206330 kernel: audit: type=1101 audit(1761957924.173:562): pid=5134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:45:24.208631 sshd[5134]: Accepted publickey for core from 139.178.68.195 port 56184 ssh2: RSA SHA256:GSqF/4F3rRKdKeqeDHvdnEOSnHTK3+r0cz3SPwoprYw
Nov 1 00:45:24.210856 sshd[5134]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:45:24.208000 audit[5134]: CRED_ACQ pid=5134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:45:24.238285 kernel: audit: type=1103 audit(1761957924.208:563): pid=5134 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:45:24.250006 systemd[1]: Started session-22.scope.
Nov 1 00:45:24.261200 kernel: audit: type=1006 audit(1761957924.208:564): pid=5134 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1
Nov 1 00:45:24.261507 systemd-logind[1286]: New session 22 of user core.
Nov 1 00:45:24.208000 audit[5134]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffca7191560 a2=3 a3=0 items=0 ppid=1 pid=5134 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:45:24.302370 kernel: audit: type=1300 audit(1761957924.208:564): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffca7191560 a2=3 a3=0 items=0 ppid=1 pid=5134 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:45:24.314450 kernel: audit: type=1327 audit(1761957924.208:564): proctitle=737368643A20636F7265205B707269765D
Nov 1 00:45:24.208000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Nov 1 00:45:24.324000 audit[5134]: USER_START pid=5134 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:45:24.359559 kernel: audit: type=1105 audit(1761957924.324:565): pid=5134 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:45:24.359000 audit[5137]: CRED_ACQ pid=5137 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:45:24.386053 kernel: audit: type=1103 audit(1761957924.359:566): pid=5137 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:45:24.801106 sshd[5134]: pam_unix(sshd:session): session closed for user core
Nov 1 00:45:24.801000 audit[5134]: USER_END pid=5134 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:45:24.836196 kernel: audit: type=1106 audit(1761957924.801:567): pid=5134 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:45:24.802000 audit[5134]: CRED_DISP pid=5134 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:45:24.837729 systemd[1]: sshd@24-10.128.0.16:22-139.178.68.195:56184.service: Deactivated successfully.
Nov 1 00:45:24.839414 systemd[1]: session-22.scope: Deactivated successfully.
Nov 1 00:45:24.851769 systemd-logind[1286]: Session 22 logged out. Waiting for processes to exit.
Nov 1 00:45:24.854309 systemd-logind[1286]: Removed session 22.
Nov 1 00:45:24.861212 kernel: audit: type=1104 audit(1761957924.802:568): pid=5134 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Nov 1 00:45:24.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.128.0.16:22-139.178.68.195:56184 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:25.392156 kubelet[2193]: E1101 00:45:25.392086 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xt7wl" podUID="bb5676df-eb26-4a3d-9a39-dc277ac29b28"
Nov 1 00:45:28.393586 kubelet[2193]: E1101 00:45:28.393518 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-768cf9cc9d-2cqdw" podUID="bae1cc02-5d35-4e6c-8d44-6ad010de9d41"
Nov 1 00:45:29.877199 kernel: kauditd_printk_skb: 1 callbacks suppressed
Nov 1 00:45:29.877426 kernel: audit: type=1130 audit(1761957929.844:570): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.128.0.16:22-139.178.68.195:56198 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:29.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.128.0.16:22-139.178.68.195:56198 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:45:29.845724 systemd[1]: Started sshd@25-10.128.0.16:22-139.178.68.195:56198.service.
Nov 1 00:45:30.182000 audit[5154]: USER_ACCT pid=5154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:30.214354 kernel: audit: type=1101 audit(1761957930.182:571): pid=5154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:30.224210 sshd[5154]: Accepted publickey for core from 139.178.68.195 port 56198 ssh2: RSA SHA256:GSqF/4F3rRKdKeqeDHvdnEOSnHTK3+r0cz3SPwoprYw Nov 1 00:45:30.224809 sshd[5154]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:45:30.222000 audit[5154]: CRED_ACQ pid=5154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:30.245577 systemd[1]: Started session-23.scope. Nov 1 00:45:30.247261 systemd-logind[1286]: New session 23 of user core. 
Nov 1 00:45:30.253393 kernel: audit: type=1103 audit(1761957930.222:572): pid=5154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:30.278147 kernel: audit: type=1006 audit(1761957930.222:573): pid=5154 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Nov 1 00:45:30.222000 audit[5154]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdf954d550 a2=3 a3=0 items=0 ppid=1 pid=5154 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:30.222000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:45:30.316379 kernel: audit: type=1300 audit(1761957930.222:573): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdf954d550 a2=3 a3=0 items=0 ppid=1 pid=5154 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:30.316584 kernel: audit: type=1327 audit(1761957930.222:573): proctitle=737368643A20636F7265205B707269765D Nov 1 00:45:30.262000 audit[5154]: USER_START pid=5154 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:30.349246 kernel: audit: type=1105 audit(1761957930.262:574): pid=5154 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:30.351221 kernel: audit: type=1103 audit(1761957930.290:575): pid=5157 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:30.290000 audit[5157]: CRED_ACQ pid=5157 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:30.602410 sshd[5154]: pam_unix(sshd:session): session closed for user core Nov 1 00:45:30.603000 audit[5154]: USER_END pid=5154 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:30.640218 kernel: audit: type=1106 audit(1761957930.603:576): pid=5154 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:30.640483 systemd[1]: sshd@25-10.128.0.16:22-139.178.68.195:56198.service: Deactivated successfully. Nov 1 00:45:30.643256 systemd[1]: session-23.scope: Deactivated successfully. Nov 1 00:45:30.643290 systemd-logind[1286]: Session 23 logged out. Waiting for processes to exit. Nov 1 00:45:30.646423 systemd-logind[1286]: Removed session 23. 
Nov 1 00:45:30.603000 audit[5154]: CRED_DISP pid=5154 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:30.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.128.0.16:22-139.178.68.195:56198 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:30.680491 kernel: audit: type=1104 audit(1761957930.603:577): pid=5154 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:31.397734 kubelet[2193]: E1101 00:45:31.397675 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-f99bc94f9-dqr4b" podUID="34b32444-f031-47a6-89b0-97775432ade7" Nov 1 00:45:31.402550 kubelet[2193]: E1101 00:45:31.402412 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", 
failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-fvm9x" podUID="1db94968-800e-4bd7-88c1-2551a090e4ab" Nov 1 00:45:32.393119 kubelet[2193]: E1101 00:45:32.393052 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-699db95d94-pf9ws" podUID="4e92eb00-99ac-4f51-a076-ab8bc59ed374" Nov 1 00:45:32.393710 kubelet[2193]: E1101 00:45:32.393665 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-699db95d94-lmk4w" podUID="74931096-7bc0-4134-a8b4-61ec9bf5e338" Nov 1 00:45:35.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@26-10.128.0.16:22-139.178.68.195:33342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:35.650311 systemd[1]: Started sshd@26-10.128.0.16:22-139.178.68.195:33342.service. Nov 1 00:45:35.656146 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:45:35.656353 kernel: audit: type=1130 audit(1761957935.649:579): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.128.0.16:22-139.178.68.195:33342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:36.002000 audit[5168]: USER_ACCT pid=5168 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:36.034342 kernel: audit: type=1101 audit(1761957936.002:580): pid=5168 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:36.036630 sshd[5168]: Accepted publickey for core from 139.178.68.195 port 33342 ssh2: RSA SHA256:GSqF/4F3rRKdKeqeDHvdnEOSnHTK3+r0cz3SPwoprYw Nov 1 00:45:36.040215 sshd[5168]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:45:36.053406 systemd-logind[1286]: New session 24 of user core. Nov 1 00:45:36.055012 systemd[1]: Started session-24.scope. 
Nov 1 00:45:36.036000 audit[5168]: CRED_ACQ pid=5168 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:36.117320 kernel: audit: type=1103 audit(1761957936.036:581): pid=5168 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:36.165326 kernel: audit: type=1006 audit(1761957936.037:582): pid=5168 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Nov 1 00:45:36.037000 audit[5168]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcb1e449c0 a2=3 a3=0 items=0 ppid=1 pid=5168 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:36.037000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:45:36.221801 kernel: audit: type=1300 audit(1761957936.037:582): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcb1e449c0 a2=3 a3=0 items=0 ppid=1 pid=5168 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:45:36.222038 kernel: audit: type=1327 audit(1761957936.037:582): proctitle=737368643A20636F7265205B707269765D Nov 1 00:45:36.072000 audit[5168]: USER_START pid=5168 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:36.272341 
kernel: audit: type=1105 audit(1761957936.072:583): pid=5168 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:36.075000 audit[5171]: CRED_ACQ pid=5171 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:36.312209 kernel: audit: type=1103 audit(1761957936.075:584): pid=5171 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:36.392426 env[1301]: time="2025-11-01T00:45:36.392357087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:45:36.506031 sshd[5168]: pam_unix(sshd:session): session closed for user core Nov 1 00:45:36.507000 audit[5168]: USER_END pid=5168 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:36.541691 kernel: audit: type=1106 audit(1761957936.507:585): pid=5168 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:36.507000 audit[5168]: CRED_DISP pid=5168 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:36.558391 systemd[1]: sshd@26-10.128.0.16:22-139.178.68.195:33342.service: Deactivated successfully. Nov 1 00:45:36.576071 systemd[1]: session-24.scope: Deactivated successfully. Nov 1 00:45:36.582927 systemd-logind[1286]: Session 24 logged out. Waiting for processes to exit. Nov 1 00:45:36.583295 kernel: audit: type=1104 audit(1761957936.507:586): pid=5168 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Nov 1 00:45:36.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.128.0.16:22-139.178.68.195:33342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:45:36.588886 systemd-logind[1286]: Removed session 24. 
Nov 1 00:45:36.592225 env[1301]: time="2025-11-01T00:45:36.591881353Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:45:36.593808 env[1301]: time="2025-11-01T00:45:36.593600345Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:45:36.594307 kubelet[2193]: E1101 00:45:36.594239 2193 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:45:36.595029 kubelet[2193]: E1101 00:45:36.594980 2193 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:45:36.595597 kubelet[2193]: E1101 00:45:36.595511 2193 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qm7px,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-xt7wl_calico-system(bb5676df-eb26-4a3d-9a39-dc277ac29b28): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:45:36.598414 kubelet[2193]: E1101 00:45:36.598351 2193 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xt7wl" podUID="bb5676df-eb26-4a3d-9a39-dc277ac29b28"