Nov 1 00:15:51.002252 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Nov 1 00:15:51.002270 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Oct 31 23:12:38 -00 2025
Nov 1 00:15:51.002277 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Nov 1 00:15:51.002285 kernel: printk: bootconsole [pl11] enabled
Nov 1 00:15:51.002290 kernel: efi: EFI v2.70 by EDK II
Nov 1 00:15:51.002295 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3ead8b98 RNG=0x3fd5f998 MEMRESERVE=0x3761cf98
Nov 1 00:15:51.002301 kernel: random: crng init done
Nov 1 00:15:51.002307 kernel: ACPI: Early table checksum verification disabled
Nov 1 00:15:51.002312 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Nov 1 00:15:51.002318 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:15:51.002323 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:15:51.002328 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Nov 1 00:15:51.002335 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:15:51.002341 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:15:51.002347 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:15:51.002353 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:15:51.002359 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:15:51.002366 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:15:51.002372 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Nov 1 00:15:51.002377 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:15:51.002383 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Nov 1 00:15:51.002389 kernel: NUMA: Failed to initialise from firmware
Nov 1 00:15:51.002394 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Nov 1 00:15:51.002400 kernel: NUMA: NODE_DATA [mem 0x1bf7f1900-0x1bf7f6fff]
Nov 1 00:15:51.002405 kernel: Zone ranges:
Nov 1 00:15:51.002411 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Nov 1 00:15:51.002417 kernel: DMA32 empty
Nov 1 00:15:51.002422 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Nov 1 00:15:51.002429 kernel: Movable zone start for each node
Nov 1 00:15:51.002435 kernel: Early memory node ranges
Nov 1 00:15:51.002440 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Nov 1 00:15:51.002446 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Nov 1 00:15:51.002452 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Nov 1 00:15:51.002457 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Nov 1 00:15:51.002463 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Nov 1 00:15:51.002468 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Nov 1 00:15:51.002474 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Nov 1 00:15:51.002480 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Nov 1 00:15:51.002485 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Nov 1 00:15:51.002491 kernel: psci: probing for conduit method from ACPI.
Nov 1 00:15:51.002500 kernel: psci: PSCIv1.1 detected in firmware.
Nov 1 00:15:51.002506 kernel: psci: Using standard PSCI v0.2 function IDs
Nov 1 00:15:51.002512 kernel: psci: MIGRATE_INFO_TYPE not supported.
Nov 1 00:15:51.002518 kernel: psci: SMC Calling Convention v1.4
Nov 1 00:15:51.002524 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Nov 1 00:15:51.002531 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Nov 1 00:15:51.002537 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Nov 1 00:15:51.002543 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Nov 1 00:15:51.002549 kernel: pcpu-alloc: [0] 0 [0] 1
Nov 1 00:15:51.002555 kernel: Detected PIPT I-cache on CPU0
Nov 1 00:15:51.002561 kernel: CPU features: detected: GIC system register CPU interface
Nov 1 00:15:51.002567 kernel: CPU features: detected: Hardware dirty bit management
Nov 1 00:15:51.002573 kernel: CPU features: detected: Spectre-BHB
Nov 1 00:15:51.002579 kernel: CPU features: kernel page table isolation forced ON by KASLR
Nov 1 00:15:51.002585 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Nov 1 00:15:51.002591 kernel: CPU features: detected: ARM erratum 1418040
Nov 1 00:15:51.002599 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Nov 1 00:15:51.002605 kernel: CPU features: detected: SSBS not fully self-synchronizing
Nov 1 00:15:51.002611 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Nov 1 00:15:51.002616 kernel: Policy zone: Normal
Nov 1 00:15:51.002624 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=284392058f112e827cd7c521dcce1be27e1367d0030df494642d12e41e342e29
Nov 1 00:15:51.002630 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 1 00:15:51.002636 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 1 00:15:51.002642 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 1 00:15:51.002648 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 00:15:51.002654 kernel: software IO TLB: mapped [mem 0x000000003a550000-0x000000003e550000] (64MB)
Nov 1 00:15:51.002661 kernel: Memory: 3986872K/4194160K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 207288K reserved, 0K cma-reserved)
Nov 1 00:15:51.002668 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 1 00:15:51.002674 kernel: trace event string verifier disabled
Nov 1 00:15:51.002680 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 1 00:15:51.002686 kernel: rcu: RCU event tracing is enabled.
Nov 1 00:15:51.002693 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 1 00:15:51.002699 kernel: Trampoline variant of Tasks RCU enabled.
Nov 1 00:15:51.002705 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 00:15:51.002711 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 00:15:51.002717 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 1 00:15:51.005749 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Nov 1 00:15:51.005768 kernel: GICv3: 960 SPIs implemented
Nov 1 00:15:51.005778 kernel: GICv3: 0 Extended SPIs implemented
Nov 1 00:15:51.005785 kernel: GICv3: Distributor has no Range Selector support
Nov 1 00:15:51.005791 kernel: Root IRQ handler: gic_handle_irq
Nov 1 00:15:51.005797 kernel: GICv3: 16 PPIs implemented
Nov 1 00:15:51.005803 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Nov 1 00:15:51.005809 kernel: ITS: No ITS available, not enabling LPIs
Nov 1 00:15:51.005815 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 1 00:15:51.005821 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Nov 1 00:15:51.005828 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Nov 1 00:15:51.005834 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Nov 1 00:15:51.005840 kernel: Console: colour dummy device 80x25
Nov 1 00:15:51.005848 kernel: printk: console [tty1] enabled
Nov 1 00:15:51.005855 kernel: ACPI: Core revision 20210730
Nov 1 00:15:51.005861 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Nov 1 00:15:51.005868 kernel: pid_max: default: 32768 minimum: 301
Nov 1 00:15:51.005874 kernel: LSM: Security Framework initializing
Nov 1 00:15:51.005880 kernel: SELinux: Initializing.
Nov 1 00:15:51.005886 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 1 00:15:51.005893 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 1 00:15:51.005899 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Nov 1 00:15:51.005906 kernel: Hyper-V: Host Build 10.0.22477.1619-1-0
Nov 1 00:15:51.005912 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 00:15:51.005918 kernel: Remapping and enabling EFI services.
Nov 1 00:15:51.005925 kernel: smp: Bringing up secondary CPUs ...
Nov 1 00:15:51.005931 kernel: Detected PIPT I-cache on CPU1
Nov 1 00:15:51.005937 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Nov 1 00:15:51.005944 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 1 00:15:51.005950 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Nov 1 00:15:51.005956 kernel: smp: Brought up 1 node, 2 CPUs
Nov 1 00:15:51.005962 kernel: SMP: Total of 2 processors activated.
Nov 1 00:15:51.005970 kernel: CPU features: detected: 32-bit EL0 Support
Nov 1 00:15:51.005976 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Nov 1 00:15:51.005983 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Nov 1 00:15:51.005989 kernel: CPU features: detected: CRC32 instructions
Nov 1 00:15:51.005995 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Nov 1 00:15:51.006002 kernel: CPU features: detected: LSE atomic instructions
Nov 1 00:15:51.006008 kernel: CPU features: detected: Privileged Access Never
Nov 1 00:15:51.006014 kernel: CPU: All CPU(s) started at EL1
Nov 1 00:15:51.006020 kernel: alternatives: patching kernel code
Nov 1 00:15:51.006027 kernel: devtmpfs: initialized
Nov 1 00:15:51.006038 kernel: KASLR enabled
Nov 1 00:15:51.006045 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 00:15:51.006053 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 1 00:15:51.006060 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 00:15:51.006066 kernel: SMBIOS 3.1.0 present.
Nov 1 00:15:51.006073 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Nov 1 00:15:51.006079 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 00:15:51.006086 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Nov 1 00:15:51.006094 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 1 00:15:51.006100 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 1 00:15:51.006107 kernel: audit: initializing netlink subsys (disabled)
Nov 1 00:15:51.006113 kernel: audit: type=2000 audit(0.085:1): state=initialized audit_enabled=0 res=1
Nov 1 00:15:51.006120 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 00:15:51.006126 kernel: cpuidle: using governor menu
Nov 1 00:15:51.006133 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Nov 1 00:15:51.006141 kernel: ASID allocator initialised with 32768 entries
Nov 1 00:15:51.006148 kernel: ACPI: bus type PCI registered
Nov 1 00:15:51.006155 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 00:15:51.006161 kernel: Serial: AMBA PL011 UART driver
Nov 1 00:15:51.006168 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Nov 1 00:15:51.006174 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Nov 1 00:15:51.006181 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 00:15:51.006187 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Nov 1 00:15:51.006194 kernel: cryptd: max_cpu_qlen set to 1000
Nov 1 00:15:51.006202 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Nov 1 00:15:51.006208 kernel: ACPI: Added _OSI(Module Device)
Nov 1 00:15:51.006215 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 00:15:51.006221 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 00:15:51.006227 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Nov 1 00:15:51.006234 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Nov 1 00:15:51.006240 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Nov 1 00:15:51.006247 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 1 00:15:51.006253 kernel: ACPI: Interpreter enabled
Nov 1 00:15:51.006261 kernel: ACPI: Using GIC for interrupt routing
Nov 1 00:15:51.006268 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Nov 1 00:15:51.006274 kernel: printk: console [ttyAMA0] enabled
Nov 1 00:15:51.006281 kernel: printk: bootconsole [pl11] disabled
Nov 1 00:15:51.006287 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Nov 1 00:15:51.006294 kernel: iommu: Default domain type: Translated
Nov 1 00:15:51.006300 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Nov 1 00:15:51.006306 kernel: vgaarb: loaded
Nov 1 00:15:51.006313 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 1 00:15:51.006320 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Nov 1 00:15:51.006327 kernel: PTP clock support registered
Nov 1 00:15:51.006334 kernel: Registered efivars operations
Nov 1 00:15:51.006340 kernel: No ACPI PMU IRQ for CPU0
Nov 1 00:15:51.006347 kernel: No ACPI PMU IRQ for CPU1
Nov 1 00:15:51.006353 kernel: clocksource: Switched to clocksource arch_sys_counter
Nov 1 00:15:51.006359 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 00:15:51.006366 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 00:15:51.006372 kernel: pnp: PnP ACPI init
Nov 1 00:15:51.006379 kernel: pnp: PnP ACPI: found 0 devices
Nov 1 00:15:51.006387 kernel: NET: Registered PF_INET protocol family
Nov 1 00:15:51.006393 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 1 00:15:51.006400 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 1 00:15:51.006407 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 00:15:51.006413 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 1 00:15:51.006420 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Nov 1 00:15:51.006427 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 1 00:15:51.006433 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 1 00:15:51.006441 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 1 00:15:51.006448 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 00:15:51.006454 kernel: PCI: CLS 0 bytes, default 64
Nov 1 00:15:51.006461 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Nov 1 00:15:51.006467 kernel: kvm [1]: HYP mode not available
Nov 1 00:15:51.006474 kernel: Initialise system trusted keyrings
Nov 1 00:15:51.006480 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 1 00:15:51.006487 kernel: Key type asymmetric registered
Nov 1 00:15:51.006493 kernel: Asymmetric key parser 'x509' registered
Nov 1 00:15:51.006501 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Nov 1 00:15:51.006507 kernel: io scheduler mq-deadline registered
Nov 1 00:15:51.006514 kernel: io scheduler kyber registered
Nov 1 00:15:51.006520 kernel: io scheduler bfq registered
Nov 1 00:15:51.006526 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 1 00:15:51.006533 kernel: thunder_xcv, ver 1.0
Nov 1 00:15:51.006539 kernel: thunder_bgx, ver 1.0
Nov 1 00:15:51.006546 kernel: nicpf, ver 1.0
Nov 1 00:15:51.006552 kernel: nicvf, ver 1.0
Nov 1 00:15:51.006665 kernel: rtc-efi rtc-efi.0: registered as rtc0
Nov 1 00:15:51.006742 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-01T00:15:50 UTC (1761956150)
Nov 1 00:15:51.006754 kernel: efifb: probing for efifb
Nov 1 00:15:51.006760 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Nov 1 00:15:51.006767 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Nov 1 00:15:51.006774 kernel: efifb: scrolling: redraw
Nov 1 00:15:51.006780 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Nov 1 00:15:51.006787 kernel: Console: switching to colour frame buffer device 128x48
Nov 1 00:15:51.006796 kernel: fb0: EFI VGA frame buffer device
Nov 1 00:15:51.006802 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Nov 1 00:15:51.006809 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 1 00:15:51.006815 kernel: NET: Registered PF_INET6 protocol family
Nov 1 00:15:51.006822 kernel: Segment Routing with IPv6
Nov 1 00:15:51.006828 kernel: In-situ OAM (IOAM) with IPv6
Nov 1 00:15:51.006835 kernel: NET: Registered PF_PACKET protocol family
Nov 1 00:15:51.006841 kernel: Key type dns_resolver registered
Nov 1 00:15:51.006848 kernel: registered taskstats version 1
Nov 1 00:15:51.006854 kernel: Loading compiled-in X.509 certificates
Nov 1 00:15:51.006863 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: 4aa5071b9a6f96878595e36d4bd5862a671c915d'
Nov 1 00:15:51.006869 kernel: Key type .fscrypt registered
Nov 1 00:15:51.006876 kernel: Key type fscrypt-provisioning registered
Nov 1 00:15:51.006882 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 1 00:15:51.006889 kernel: ima: Allocated hash algorithm: sha1
Nov 1 00:15:51.006895 kernel: ima: No architecture policies found
Nov 1 00:15:51.006902 kernel: clk: Disabling unused clocks
Nov 1 00:15:51.006908 kernel: Freeing unused kernel memory: 36416K
Nov 1 00:15:51.006916 kernel: Run /init as init process
Nov 1 00:15:51.006923 kernel: with arguments:
Nov 1 00:15:51.006929 kernel: /init
Nov 1 00:15:51.006935 kernel: with environment:
Nov 1 00:15:51.006942 kernel: HOME=/
Nov 1 00:15:51.006948 kernel: TERM=linux
Nov 1 00:15:51.006955 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 1 00:15:51.006963 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 1 00:15:51.006973 systemd[1]: Detected virtualization microsoft.
Nov 1 00:15:51.006981 systemd[1]: Detected architecture arm64.
Nov 1 00:15:51.006988 systemd[1]: Running in initrd.
Nov 1 00:15:51.006995 systemd[1]: No hostname configured, using default hostname.
Nov 1 00:15:51.007001 systemd[1]: Hostname set to .
Nov 1 00:15:51.007009 systemd[1]: Initializing machine ID from random generator.
Nov 1 00:15:51.007016 systemd[1]: Queued start job for default target initrd.target.
Nov 1 00:15:51.007023 systemd[1]: Started systemd-ask-password-console.path.
Nov 1 00:15:51.007031 systemd[1]: Reached target cryptsetup.target.
Nov 1 00:15:51.007038 systemd[1]: Reached target paths.target.
Nov 1 00:15:51.007045 systemd[1]: Reached target slices.target.
Nov 1 00:15:51.007052 systemd[1]: Reached target swap.target.
Nov 1 00:15:51.007059 systemd[1]: Reached target timers.target.
Nov 1 00:15:51.007066 systemd[1]: Listening on iscsid.socket.
Nov 1 00:15:51.007073 systemd[1]: Listening on iscsiuio.socket.
Nov 1 00:15:51.007080 systemd[1]: Listening on systemd-journald-audit.socket.
Nov 1 00:15:51.007089 systemd[1]: Listening on systemd-journald-dev-log.socket.
Nov 1 00:15:51.007096 systemd[1]: Listening on systemd-journald.socket.
Nov 1 00:15:51.007103 systemd[1]: Listening on systemd-networkd.socket.
Nov 1 00:15:51.007110 systemd[1]: Listening on systemd-udevd-control.socket.
Nov 1 00:15:51.007117 systemd[1]: Listening on systemd-udevd-kernel.socket.
Nov 1 00:15:51.007124 systemd[1]: Reached target sockets.target.
Nov 1 00:15:51.007131 systemd[1]: Starting kmod-static-nodes.service...
Nov 1 00:15:51.007138 systemd[1]: Finished network-cleanup.service.
Nov 1 00:15:51.007145 systemd[1]: Starting systemd-fsck-usr.service...
Nov 1 00:15:51.007154 systemd[1]: Starting systemd-journald.service...
Nov 1 00:15:51.007161 systemd[1]: Starting systemd-modules-load.service...
Nov 1 00:15:51.007168 systemd[1]: Starting systemd-resolved.service...
Nov 1 00:15:51.007178 systemd-journald[276]: Journal started
Nov 1 00:15:51.007217 systemd-journald[276]: Runtime Journal (/run/log/journal/a2112c25892c483dad3433e2b64534e5) is 8.0M, max 78.5M, 70.5M free.
Nov 1 00:15:50.999823 systemd-modules-load[277]: Inserted module 'overlay'
Nov 1 00:15:51.025942 systemd[1]: Starting systemd-vconsole-setup.service...
Nov 1 00:15:51.038739 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 1 00:15:51.051890 systemd[1]: Started systemd-journald.service.
Nov 1 00:15:51.051931 kernel: Bridge firewalling registered
Nov 1 00:15:51.052007 systemd-modules-load[277]: Inserted module 'br_netfilter'
Nov 1 00:15:51.071529 kernel: audit: type=1130 audit(1761956151.051:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:15:51.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:15:51.052405 systemd[1]: Finished kmod-static-nodes.service.
Nov 1 00:15:51.058004 systemd-resolved[278]: Positive Trust Anchors:
Nov 1 00:15:51.105839 kernel: audit: type=1130 audit(1761956151.084:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:15:51.105864 kernel: SCSI subsystem initialized
Nov 1 00:15:51.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:15:51.058013 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 00:15:51.144842 kernel: audit: type=1130 audit(1761956151.109:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:15:51.144865 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 1 00:15:51.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:15:51.058039 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Nov 1 00:15:51.208060 kernel: device-mapper: uevent: version 1.0.3
Nov 1 00:15:51.208082 kernel: audit: type=1130 audit(1761956151.148:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:15:51.208092 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Nov 1 00:15:51.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:15:51.060142 systemd-resolved[278]: Defaulting to hostname 'linux'.
Nov 1 00:15:51.234180 kernel: audit: type=1130 audit(1761956151.212:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:15:51.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:15:51.084827 systemd[1]: Started systemd-resolved.service.
Nov 1 00:15:51.134364 systemd[1]: Finished systemd-fsck-usr.service.
Nov 1 00:15:51.169500 systemd[1]: Finished systemd-vconsole-setup.service.
Nov 1 00:15:51.288901 kernel: audit: type=1130 audit(1761956151.262:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:15:51.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:15:51.229805 systemd[1]: Reached target nss-lookup.target.
Nov 1 00:15:51.239305 systemd[1]: Starting dracut-cmdline-ask.service...
Nov 1 00:15:51.323381 kernel: audit: type=1130 audit(1761956151.298:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:15:51.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:15:51.246847 systemd-modules-load[277]: Inserted module 'dm_multipath'
Nov 1 00:15:51.247538 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Nov 1 00:15:51.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:15:51.257922 systemd[1]: Finished systemd-modules-load.service.
Nov 1 00:15:51.377332 kernel: audit: type=1130 audit(1761956151.323:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:15:51.377358 kernel: audit: type=1130 audit(1761956151.332:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:15:51.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:15:51.280326 systemd[1]: Starting systemd-sysctl.service...
Nov 1 00:15:51.294282 systemd[1]: Finished dracut-cmdline-ask.service.
Nov 1 00:15:51.299315 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Nov 1 00:15:51.323822 systemd[1]: Finished systemd-sysctl.service.
Nov 1 00:15:51.395713 dracut-cmdline[298]: dracut-dracut-053
Nov 1 00:15:51.395713 dracut-cmdline[298]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=284392058f112e827cd7c521dcce1be27e1367d0030df494642d12e41e342e29
Nov 1 00:15:51.333738 systemd[1]: Starting dracut-cmdline.service...
Nov 1 00:15:51.456740 kernel: Loading iSCSI transport class v2.0-870.
Nov 1 00:15:51.469747 kernel: iscsi: registered transport (tcp)
Nov 1 00:15:51.490129 kernel: iscsi: registered transport (qla4xxx)
Nov 1 00:15:51.490176 kernel: QLogic iSCSI HBA Driver
Nov 1 00:15:51.518930 systemd[1]: Finished dracut-cmdline.service.
Nov 1 00:15:51.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:15:51.524473 systemd[1]: Starting dracut-pre-udev.service...
Nov 1 00:15:51.575743 kernel: raid6: neonx8 gen() 13813 MB/s
Nov 1 00:15:51.595743 kernel: raid6: neonx8 xor() 10836 MB/s
Nov 1 00:15:51.617735 kernel: raid6: neonx4 gen() 13541 MB/s
Nov 1 00:15:51.637734 kernel: raid6: neonx4 xor() 11090 MB/s
Nov 1 00:15:51.657737 kernel: raid6: neonx2 gen() 13012 MB/s
Nov 1 00:15:51.679734 kernel: raid6: neonx2 xor() 10400 MB/s
Nov 1 00:15:51.715731 kernel: raid6: neonx1 gen() 10529 MB/s
Nov 1 00:15:51.726745 kernel: raid6: neonx1 xor() 8791 MB/s
Nov 1 00:15:51.740744 kernel: raid6: int64x8 gen() 6269 MB/s
Nov 1 00:15:51.761734 kernel: raid6: int64x8 xor() 3545 MB/s
Nov 1 00:15:51.781737 kernel: raid6: int64x4 gen() 7199 MB/s
Nov 1 00:15:51.802735 kernel: raid6: int64x4 xor() 3856 MB/s
Nov 1 00:15:51.823733 kernel: raid6: int64x2 gen() 6155 MB/s
Nov 1 00:15:51.843737 kernel: raid6: int64x2 xor() 3323 MB/s
Nov 1 00:15:51.864739 kernel: raid6: int64x1 gen() 5046 MB/s
Nov 1 00:15:51.889300 kernel: raid6: int64x1 xor() 2648 MB/s
Nov 1 00:15:51.889331 kernel: raid6: using algorithm neonx8 gen() 13813 MB/s
Nov 1 00:15:51.889348 kernel: raid6: .... xor() 10836 MB/s, rmw enabled
Nov 1 00:15:51.893538 kernel: raid6: using neon recovery algorithm
Nov 1 00:15:51.915450 kernel: xor: measuring software checksum speed
Nov 1 00:15:51.915462 kernel: 8regs : 17188 MB/sec
Nov 1 00:15:51.919884 kernel: 32regs : 20697 MB/sec
Nov 1 00:15:51.923859 kernel: arm64_neon : 27955 MB/sec
Nov 1 00:15:51.923869 kernel: xor: using function: arm64_neon (27955 MB/sec)
Nov 1 00:15:51.983748 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Nov 1 00:15:51.992773 systemd[1]: Finished dracut-pre-udev.service.
Nov 1 00:15:51.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:15:52.001000 audit: BPF prog-id=7 op=LOAD
Nov 1 00:15:52.001000 audit: BPF prog-id=8 op=LOAD
Nov 1 00:15:52.001659 systemd[1]: Starting systemd-udevd.service...
Nov 1 00:15:52.016004 systemd-udevd[474]: Using default interface naming scheme 'v252'.
Nov 1 00:15:52.022813 systemd[1]: Started systemd-udevd.service.
Nov 1 00:15:52.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:15:52.032571 systemd[1]: Starting dracut-pre-trigger.service...
Nov 1 00:15:52.045260 dracut-pre-trigger[495]: rd.md=0: removing MD RAID activation
Nov 1 00:15:52.069421 systemd[1]: Finished dracut-pre-trigger.service.
Nov 1 00:15:52.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:15:52.074931 systemd[1]: Starting systemd-udev-trigger.service...
Nov 1 00:15:52.111975 systemd[1]: Finished systemd-udev-trigger.service.
Nov 1 00:15:52.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:15:52.159742 kernel: hv_vmbus: Vmbus version:5.3
Nov 1 00:15:52.166741 kernel: hv_vmbus: registering driver hid_hyperv
Nov 1 00:15:52.178754 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Nov 1 00:15:52.178781 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Nov 1 00:15:52.179747 kernel: hv_vmbus: registering driver hyperv_keyboard
Nov 1 00:15:52.180744 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Nov 1 00:15:52.212751 kernel: hv_vmbus: registering driver hv_netvsc
Nov 1 00:15:52.212803 kernel: hv_vmbus: registering driver hv_storvsc
Nov 1 00:15:52.227862 kernel: scsi host0: storvsc_host_t
Nov 1 00:15:52.228078 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Nov 1 00:15:52.234891 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Nov 1 00:15:52.238178 kernel: scsi host1: storvsc_host_t
Nov 1 00:15:52.262483 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Nov 1 00:15:52.273820 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 1 00:15:52.273842 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Nov 1 00:15:52.297437 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Nov 1 00:15:52.297535 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Nov 1 00:15:52.297613 kernel: sd 0:0:0:0: [sda] Write Protect is off
Nov 1 00:15:52.297689 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Nov 1 00:15:52.297784 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Nov 1 00:15:52.297878 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 00:15:52.297895 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Nov 1 00:15:52.323759 kernel: hv_netvsc 000d3a07-5314-000d-3a07-5314000d3a07 eth0: VF slot 1 added
Nov 1 00:15:52.331777 kernel: hv_vmbus: registering driver hv_pci
Nov 1 00:15:52.340741 kernel: hv_pci 8ffe1c7d-765d-46a1-ac9c-222388bf44b7: PCI VMBus probing: Using version 0x10004
Nov 1 00:15:52.419770 kernel: hv_pci 8ffe1c7d-765d-46a1-ac9c-222388bf44b7: PCI host bridge to bus 765d:00
Nov 1 00:15:52.419884 kernel: pci_bus 765d:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Nov 1 00:15:52.419990 kernel: pci_bus 765d:00: No busn resource found for root bus, will use [bus 00-ff]
Nov 1 00:15:52.420061 kernel: pci 765d:00:02.0: [15b3:1018] type 00 class 0x020000
Nov 1 00:15:52.420150 kernel: pci 765d:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Nov 1 00:15:52.420226 kernel: pci 765d:00:02.0: enabling Extended Tags
Nov 1 00:15:52.420300 kernel: pci 765d:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 765d:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Nov 1 00:15:52.420374 kernel: pci_bus 765d:00: busn_res: [bus 00-ff] end is updated to 00
Nov 1 00:15:52.420443 kernel: pci 765d:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Nov 1 00:15:52.457492 kernel: mlx5_core 765d:00:02.0: enabling device (0000 -> 0002)
Nov 1 00:15:52.682019 kernel: mlx5_core 765d:00:02.0: firmware version: 16.30.1284
Nov 1 00:15:52.682132 kernel: mlx5_core 765d:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0)
Nov 1 00:15:52.682211 kernel: hv_netvsc 000d3a07-5314-000d-3a07-5314000d3a07 eth0: VF registering: eth1
Nov 1 00:15:52.682290 kernel: mlx5_core 765d:00:02.0 eth1: joined to eth0
Nov 1 00:15:52.691746 kernel: mlx5_core 765d:00:02.0 enP30301s1: renamed from eth1
Nov 1 00:15:52.745753 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (542)
Nov 1 00:15:52.758023 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Nov 1 00:15:52.773673 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Nov 1 00:15:53.003459 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Nov 1 00:15:53.039245 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Nov 1 00:15:53.045654 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Nov 1 00:15:53.060344 systemd[1]: Starting disk-uuid.service...
Nov 1 00:15:53.085756 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 00:15:53.095747 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 00:15:54.111138 disk-uuid[604]: The operation has completed successfully.
Nov 1 00:15:54.116289 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 1 00:15:54.180753 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 1 00:15:54.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:15:54.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:15:54.180848 systemd[1]: Finished disk-uuid.service.
Nov 1 00:15:54.193512 systemd[1]: Starting verity-setup.service...
Nov 1 00:15:54.235747 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Nov 1 00:15:54.607925 systemd[1]: Found device dev-mapper-usr.device.
Nov 1 00:15:54.614123 systemd[1]: Mounting sysusr-usr.mount...
Nov 1 00:15:54.625600 systemd[1]: Finished verity-setup.service.
Nov 1 00:15:54.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:15:54.689757 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Nov 1 00:15:54.690094 systemd[1]: Mounted sysusr-usr.mount.
Nov 1 00:15:54.694225 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Nov 1 00:15:54.694971 systemd[1]: Starting ignition-setup.service...
Nov 1 00:15:54.702402 systemd[1]: Starting parse-ip-for-networkd.service...
Nov 1 00:15:54.750655 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Nov 1 00:15:54.750707 kernel: BTRFS info (device sda6): using free space tree
Nov 1 00:15:54.750717 kernel: BTRFS info (device sda6): has skinny extents
Nov 1 00:15:54.793136 systemd[1]: Finished parse-ip-for-networkd.service.
Nov 1 00:15:54.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:15:54.802000 audit: BPF prog-id=9 op=LOAD
Nov 1 00:15:54.803165 systemd[1]: Starting systemd-networkd.service...
Nov 1 00:15:54.828086 systemd-networkd[868]: lo: Link UP
Nov 1 00:15:54.828097 systemd-networkd[868]: lo: Gained carrier
Nov 1 00:15:54.828500 systemd-networkd[868]: Enumeration completed
Nov 1 00:15:54.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:15:54.832203 systemd-networkd[868]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:15:54.833811 systemd[1]: Started systemd-networkd.service.
Nov 1 00:15:54.840460 systemd[1]: Reached target network.target.
Nov 1 00:15:54.855612 systemd[1]: Starting iscsiuio.service...
Nov 1 00:15:54.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:15:54.864005 systemd[1]: Started iscsiuio.service.
Nov 1 00:15:54.884678 iscsid[873]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Nov 1 00:15:54.884678 iscsid[873]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Nov 1 00:15:54.884678 iscsid[873]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Nov 1 00:15:54.884678 iscsid[873]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Nov 1 00:15:54.884678 iscsid[873]: If using hardware iscsi like qla4xxx this message can be ignored.
Nov 1 00:15:54.884678 iscsid[873]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Nov 1 00:15:54.884678 iscsid[873]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Nov 1 00:15:54.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:15:54.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:15:54.872922 systemd[1]: Starting iscsid.service...
Nov 1 00:15:54.888362 systemd[1]: Started iscsid.service.
Nov 1 00:15:54.901361 systemd[1]: Starting dracut-initqueue.service...
Nov 1 00:15:54.953915 systemd[1]: Finished dracut-initqueue.service.
Nov 1 00:15:54.966901 systemd[1]: Reached target remote-fs-pre.target.
Nov 1 00:15:54.979685 systemd[1]: Reached target remote-cryptsetup.target.
Nov 1 00:15:55.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:15:55.029099 kernel: kauditd_printk_skb: 16 callbacks suppressed
Nov 1 00:15:55.029117 kernel: audit: type=1130 audit(1761956155.018:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:15:54.988262 systemd[1]: Reached target remote-fs.target.
Nov 1 00:15:54.996985 systemd[1]: Starting dracut-pre-mount.service...
Nov 1 00:15:55.014449 systemd[1]: Finished dracut-pre-mount.service.
Nov 1 00:15:55.039261 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 1 00:15:55.067639 kernel: mlx5_core 765d:00:02.0 enP30301s1: Link up
Nov 1 00:15:55.067806 kernel: buffer_size[0]=0 is not enough for lossless buffer
Nov 1 00:15:55.120787 kernel: hv_netvsc 000d3a07-5314-000d-3a07-5314000d3a07 eth0: Data path switched to VF: enP30301s1
Nov 1 00:15:55.121000 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Nov 1 00:15:55.121685 systemd-networkd[868]: enP30301s1: Link UP
Nov 1 00:15:55.121957 systemd-networkd[868]: eth0: Link UP
Nov 1 00:15:55.122299 systemd-networkd[868]: eth0: Gained carrier
Nov 1 00:15:55.134189 systemd-networkd[868]: enP30301s1: Gained carrier
Nov 1 00:15:55.146787 systemd-networkd[868]: eth0: DHCPv4 address 10.200.20.42/24, gateway 10.200.20.1 acquired from 168.63.129.16
Nov 1 00:15:55.418830 systemd[1]: Finished ignition-setup.service.
Nov 1 00:15:55.445841 kernel: audit: type=1130 audit(1761956155.423:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:15:55.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:15:55.424275 systemd[1]: Starting ignition-fetch-offline.service...
Nov 1 00:15:56.422911 systemd-networkd[868]: eth0: Gained IPv6LL
Nov 1 00:15:59.637447 ignition[895]: Ignition 2.14.0
Nov 1 00:15:59.637459 ignition[895]: Stage: fetch-offline
Nov 1 00:15:59.637511 ignition[895]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:15:59.637533 ignition[895]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Nov 1 00:15:59.994761 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 1 00:16:00.000798 ignition[895]: parsed url from cmdline: ""
Nov 1 00:16:00.000803 ignition[895]: no config URL provided
Nov 1 00:16:00.000809 ignition[895]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 00:16:00.000819 ignition[895]: no config at "/usr/lib/ignition/user.ign"
Nov 1 00:16:00.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:16:00.005245 systemd[1]: Finished ignition-fetch-offline.service.
Nov 1 00:16:00.043105 kernel: audit: type=1130 audit(1761956160.015:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:16:00.000825 ignition[895]: failed to fetch config: resource requires networking
Nov 1 00:16:00.016549 systemd[1]: Starting ignition-fetch.service...
Nov 1 00:16:00.001060 ignition[895]: Ignition finished successfully
Nov 1 00:16:00.033489 ignition[901]: Ignition 2.14.0
Nov 1 00:16:00.033507 ignition[901]: Stage: fetch
Nov 1 00:16:00.033613 ignition[901]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:16:00.033634 ignition[901]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Nov 1 00:16:00.036366 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 1 00:16:00.036490 ignition[901]: parsed url from cmdline: ""
Nov 1 00:16:00.036493 ignition[901]: no config URL provided
Nov 1 00:16:00.036498 ignition[901]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 00:16:00.036509 ignition[901]: no config at "/usr/lib/ignition/user.ign"
Nov 1 00:16:00.036539 ignition[901]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Nov 1 00:16:00.158867 ignition[901]: GET result: OK
Nov 1 00:16:00.158951 ignition[901]: config has been read from IMDS userdata
Nov 1 00:16:00.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:16:00.162608 unknown[901]: fetched base config from "system"
Nov 1 00:16:00.195363 kernel: audit: type=1130 audit(1761956160.168:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:16:00.159005 ignition[901]: parsing config with SHA512: 46732d0bb6ea3ee1464f26fea66fdfae49a635e234d758d28bf3f050748820bc202876c379f39adbed65c30607d367407467ede7cd23124b35ea836467ef057d
Nov 1 00:16:00.162616 unknown[901]: fetched base config from "system"
Nov 1 00:16:00.163199 ignition[901]: fetch: fetch complete
Nov 1 00:16:00.162621 unknown[901]: fetched user config from "azure"
Nov 1 00:16:00.163204 ignition[901]: fetch: fetch passed
Nov 1 00:16:00.164418 systemd[1]: Finished ignition-fetch.service.
Nov 1 00:16:00.163250 ignition[901]: Ignition finished successfully
Nov 1 00:16:00.189223 systemd[1]: Starting ignition-kargs.service...
Nov 1 00:16:00.250513 kernel: audit: type=1130 audit(1761956160.227:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:16:00.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:16:00.203438 ignition[907]: Ignition 2.14.0
Nov 1 00:16:00.219330 systemd[1]: Finished ignition-kargs.service.
Nov 1 00:16:00.203444 ignition[907]: Stage: kargs
Nov 1 00:16:00.228960 systemd[1]: Starting ignition-disks.service...
Nov 1 00:16:00.290536 kernel: audit: type=1130 audit(1761956160.263:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:16:00.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:16:00.203546 ignition[907]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:16:00.259358 systemd[1]: Finished ignition-disks.service.
Nov 1 00:16:00.203571 ignition[907]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Nov 1 00:16:00.264049 systemd[1]: Reached target initrd-root-device.target.
Nov 1 00:16:00.208591 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 1 00:16:00.289863 systemd[1]: Reached target local-fs-pre.target.
Nov 1 00:16:00.216030 ignition[907]: kargs: kargs passed
Nov 1 00:16:00.295268 systemd[1]: Reached target local-fs.target.
Nov 1 00:16:00.216111 ignition[907]: Ignition finished successfully
Nov 1 00:16:00.305973 systemd[1]: Reached target sysinit.target.
Nov 1 00:16:00.238409 ignition[913]: Ignition 2.14.0
Nov 1 00:16:00.314120 systemd[1]: Reached target basic.target.
Nov 1 00:16:00.238415 ignition[913]: Stage: disks
Nov 1 00:16:00.328631 systemd[1]: Starting systemd-fsck-root.service...
Nov 1 00:16:00.238515 ignition[913]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:16:00.238533 ignition[913]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Nov 1 00:16:00.241453 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 1 00:16:00.251445 ignition[913]: disks: disks passed
Nov 1 00:16:00.251494 ignition[913]: Ignition finished successfully
Nov 1 00:16:00.451984 systemd-fsck[921]: ROOT: clean, 637/7326000 files, 481087/7359488 blocks
Nov 1 00:16:00.462058 systemd[1]: Finished systemd-fsck-root.service.
Nov 1 00:16:00.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:16:00.491541 kernel: audit: type=1130 audit(1761956160.466:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:16:00.489371 systemd[1]: Mounting sysroot.mount...
Nov 1 00:16:00.519776 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Nov 1 00:16:00.520156 systemd[1]: Mounted sysroot.mount.
Nov 1 00:16:00.524148 systemd[1]: Reached target initrd-root-fs.target.
Nov 1 00:16:00.567848 systemd[1]: Mounting sysroot-usr.mount...
Nov 1 00:16:00.572553 systemd[1]: Starting flatcar-metadata-hostname.service...
Nov 1 00:16:00.585481 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 1 00:16:00.585521 systemd[1]: Reached target ignition-diskful.target.
Nov 1 00:16:00.601262 systemd[1]: Mounted sysroot-usr.mount.
Nov 1 00:16:00.656656 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Nov 1 00:16:00.662195 systemd[1]: Starting initrd-setup-root.service...
Nov 1 00:16:00.690982 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (932)
Nov 1 00:16:00.697751 initrd-setup-root[937]: cut: /sysroot/etc/passwd: No such file or directory
Nov 1 00:16:00.712828 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Nov 1 00:16:00.712848 kernel: BTRFS info (device sda6): using free space tree
Nov 1 00:16:00.712863 kernel: BTRFS info (device sda6): has skinny extents
Nov 1 00:16:00.720561 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Nov 1 00:16:00.904346 initrd-setup-root[963]: cut: /sysroot/etc/group: No such file or directory
Nov 1 00:16:00.929290 initrd-setup-root[971]: cut: /sysroot/etc/shadow: No such file or directory
Nov 1 00:16:00.953548 initrd-setup-root[979]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 1 00:16:01.654090 systemd[1]: Finished initrd-setup-root.service.
Nov 1 00:16:01.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:16:01.659600 systemd[1]: Starting ignition-mount.service...
Nov 1 00:16:01.687258 kernel: audit: type=1130 audit(1761956161.658:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:16:01.687858 systemd[1]: Starting sysroot-boot.service...
Nov 1 00:16:01.692240 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Nov 1 00:16:01.692342 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Nov 1 00:16:01.731038 ignition[1000]: INFO : Ignition 2.14.0
Nov 1 00:16:01.731038 ignition[1000]: INFO : Stage: mount
Nov 1 00:16:01.745957 ignition[1000]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:16:01.745957 ignition[1000]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Nov 1 00:16:01.745957 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 1 00:16:01.745957 ignition[1000]: INFO : mount: mount passed
Nov 1 00:16:01.745957 ignition[1000]: INFO : Ignition finished successfully
Nov 1 00:16:01.820439 kernel: audit: type=1130 audit(1761956161.745:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:16:01.820470 kernel: audit: type=1130 audit(1761956161.773:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:16:01.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:16:01.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:16:01.741085 systemd[1]: Finished sysroot-boot.service.
Nov 1 00:16:01.767658 systemd[1]: Finished ignition-mount.service.
Nov 1 00:16:03.117371 coreos-metadata[931]: Nov 01 00:16:03.117 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Nov 1 00:16:03.127427 coreos-metadata[931]: Nov 01 00:16:03.127 INFO Fetch successful
Nov 1 00:16:03.162052 coreos-metadata[931]: Nov 01 00:16:03.162 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Nov 1 00:16:03.174156 coreos-metadata[931]: Nov 01 00:16:03.174 INFO Fetch successful
Nov 1 00:16:03.198251 coreos-metadata[931]: Nov 01 00:16:03.198 INFO wrote hostname ci-3510.3.8-n-c51a7922c9 to /sysroot/etc/hostname
Nov 1 00:16:03.207323 systemd[1]: Finished flatcar-metadata-hostname.service.
Nov 1 00:16:03.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:16:03.213847 systemd[1]: Starting ignition-files.service...
Nov 1 00:16:03.241165 kernel: audit: type=1130 audit(1761956163.212:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:16:03.239978 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Nov 1 00:16:03.263744 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1010)
Nov 1 00:16:03.275530 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Nov 1 00:16:03.275556 kernel: BTRFS info (device sda6): using free space tree
Nov 1 00:16:03.275574 kernel: BTRFS info (device sda6): has skinny extents
Nov 1 00:16:03.288775 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Nov 1 00:16:03.302079 ignition[1029]: INFO : Ignition 2.14.0
Nov 1 00:16:03.302079 ignition[1029]: INFO : Stage: files
Nov 1 00:16:03.311292 ignition[1029]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Nov 1 00:16:03.311292 ignition[1029]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Nov 1 00:16:03.331384 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 1 00:16:03.331384 ignition[1029]: DEBUG : files: compiled without relabeling support, skipping
Nov 1 00:16:03.331384 ignition[1029]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 1 00:16:03.331384 ignition[1029]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 1 00:16:03.457388 ignition[1029]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 1 00:16:03.465355 ignition[1029]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 1 00:16:03.504072 unknown[1029]: wrote ssh authorized keys file for user: core
Nov 1 00:16:03.510046 ignition[1029]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 1 00:16:03.520088 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 1 00:16:03.520088 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 1 00:16:03.520088 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Nov 1 00:16:03.520088 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Nov 1 00:16:03.732670 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 1 00:16:03.847651 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Nov 1 00:16:03.859026 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 1 00:16:03.859026 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 1 00:16:03.859026 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:16:03.859026 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:16:03.859026 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:16:03.859026 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:16:03.859026 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:16:03.859026 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:16:03.940964 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:16:03.940964 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:16:03.940964 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 1 00:16:03.940964 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 1 00:16:03.940964 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Nov 1 00:16:03.940964 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
Nov 1 00:16:03.940964 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1459627486"
Nov 1 00:16:03.940964 ignition[1029]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1459627486": device or resource busy
Nov 1 00:16:03.940964 ignition[1029]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1459627486", trying btrfs: device or resource busy
Nov 1 00:16:03.940964 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1459627486"
Nov 1 00:16:03.940964 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1459627486"
Nov 1 00:16:03.940964 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem1459627486"
Nov 1 00:16:03.940964 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem1459627486"
Nov 1 00:16:03.940964 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Nov 1 00:16:03.922867 systemd[1]: mnt-oem1459627486.mount: Deactivated successfully.
Nov 1 00:16:04.103647 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Nov 1 00:16:04.103647 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition
Nov 1 00:16:04.103647 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3633891852"
Nov 1 00:16:04.103647 ignition[1029]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3633891852": device or resource busy
Nov 1 00:16:04.103647 ignition[1029]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3633891852", trying btrfs: device or resource busy
Nov 1 00:16:04.103647 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3633891852"
Nov 1 00:16:04.103647 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3633891852"
Nov 1 00:16:04.103647 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem3633891852"
Nov 1 00:16:04.103647 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem3633891852"
Nov 1 00:16:04.103647 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Nov 1 00:16:04.103647 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 1 00:16:04.103647 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Nov 1 00:16:04.476687 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK
Nov 1 00:16:04.754184 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 1 00:16:04.754184 ignition[1029]: INFO : files: op(14): [started] processing unit "waagent.service"
Nov 1 00:16:04.754184 ignition[1029]: INFO : files: op(14): [finished] processing unit "waagent.service"
Nov 1 00:16:04.754184 ignition[1029]: INFO : files: op(15): [started] processing unit "nvidia.service"
Nov 1 00:16:04.754184 ignition[1029]: INFO : files: op(15): [finished] processing unit "nvidia.service"
Nov 1 00:16:04.754184 ignition[1029]: INFO : files: op(16): [started] processing unit "containerd.service"
Nov 1 00:16:04.858777 kernel: audit: type=1130 audit(1761956164.772:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:16:04.858805 kernel: audit: type=1130 audit(1761956164.835:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:16:04.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:16:04.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:16:04.768304 systemd[1]: Finished ignition-files.service.
Nov 1 00:16:04.865217 ignition[1029]: INFO : files: op(16): op(17): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 1 00:16:04.865217 ignition[1029]: INFO : files: op(16): op(17): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 1 00:16:04.865217 ignition[1029]: INFO : files: op(16): [finished] processing unit "containerd.service" Nov 1 00:16:04.865217 ignition[1029]: INFO : files: op(18): [started] processing unit "prepare-helm.service" Nov 1 00:16:04.865217 ignition[1029]: INFO : files: op(18): op(19): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:16:04.865217 ignition[1029]: INFO : files: op(18): op(19): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:16:04.865217 ignition[1029]: INFO : files: op(18): [finished] processing unit "prepare-helm.service" Nov 1 00:16:04.865217 ignition[1029]: INFO : files: op(1a): [started] setting preset to enabled for "waagent.service" Nov 1 00:16:04.865217 ignition[1029]: INFO : files: op(1a): [finished] setting preset to enabled for "waagent.service" Nov 1 00:16:04.865217 ignition[1029]: INFO : files: op(1b): [started] setting preset to enabled for "nvidia.service" Nov 1 00:16:04.865217 ignition[1029]: INFO : files: op(1b): [finished] setting preset to enabled for "nvidia.service" Nov 1 00:16:04.865217 ignition[1029]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-helm.service" Nov 1 00:16:04.865217 ignition[1029]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 00:16:04.865217 ignition[1029]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:16:04.865217 ignition[1029]: INFO : files: 
createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:16:04.865217 ignition[1029]: INFO : files: files passed Nov 1 00:16:04.865217 ignition[1029]: INFO : Ignition finished successfully Nov 1 00:16:04.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:04.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:04.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:04.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:04.775879 systemd[1]: Starting initrd-setup-root-after-ignition.service... Nov 1 00:16:04.801432 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Nov 1 00:16:05.079444 initrd-setup-root-after-ignition[1053]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:16:04.802299 systemd[1]: Starting ignition-quench.service... Nov 1 00:16:04.815001 systemd[1]: Finished initrd-setup-root-after-ignition.service. 
Nov 1 00:16:05.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:04.861363 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 00:16:04.861465 systemd[1]: Finished ignition-quench.service. Nov 1 00:16:04.869821 systemd[1]: Reached target ignition-complete.target. Nov 1 00:16:04.900274 systemd[1]: Starting initrd-parse-etc.service... Nov 1 00:16:04.932059 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 00:16:04.932179 systemd[1]: Finished initrd-parse-etc.service. Nov 1 00:16:04.946455 systemd[1]: Reached target initrd-fs.target. Nov 1 00:16:04.957984 systemd[1]: Reached target initrd.target. Nov 1 00:16:04.969964 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Nov 1 00:16:04.977370 systemd[1]: Starting dracut-pre-pivot.service... Nov 1 00:16:05.026517 systemd[1]: Finished dracut-pre-pivot.service. Nov 1 00:16:05.042957 systemd[1]: Starting initrd-cleanup.service... Nov 1 00:16:05.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.060422 systemd[1]: Stopped target nss-lookup.target. Nov 1 00:16:05.258125 kernel: kauditd_printk_skb: 6 callbacks suppressed Nov 1 00:16:05.258149 kernel: audit: type=1131 audit(1761956165.219:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.068794 systemd[1]: Stopped target remote-cryptsetup.target. 
Nov 1 00:16:05.302857 kernel: audit: type=1131 audit(1761956165.253:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.302883 kernel: audit: type=1131 audit(1761956165.279:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.085174 systemd[1]: Stopped target timers.target. Nov 1 00:16:05.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.099387 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 00:16:05.336169 kernel: audit: type=1131 audit(1761956165.307:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.099456 systemd[1]: Stopped dracut-pre-pivot.service. Nov 1 00:16:05.366409 kernel: audit: type=1131 audit(1761956165.330:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:16:05.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.108806 systemd[1]: Stopped target initrd.target. Nov 1 00:16:05.398866 kernel: audit: type=1131 audit(1761956165.375:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.398976 ignition[1067]: INFO : Ignition 2.14.0 Nov 1 00:16:05.398976 ignition[1067]: INFO : Stage: umount Nov 1 00:16:05.398976 ignition[1067]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:16:05.398976 ignition[1067]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Nov 1 00:16:05.398976 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 00:16:05.398976 ignition[1067]: INFO : umount: umount passed Nov 1 00:16:05.398976 ignition[1067]: INFO : Ignition finished successfully Nov 1 00:16:05.553715 kernel: audit: type=1131 audit(1761956165.403:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.553748 kernel: audit: type=1130 audit(1761956165.427:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:16:05.553758 kernel: audit: type=1131 audit(1761956165.427:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.553768 kernel: audit: type=1131 audit(1761956165.471:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:16:05.118823 systemd[1]: Stopped target basic.target. Nov 1 00:16:05.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.128421 systemd[1]: Stopped target ignition-complete.target. Nov 1 00:16:05.137256 systemd[1]: Stopped target ignition-diskful.target. Nov 1 00:16:05.146568 systemd[1]: Stopped target initrd-root-device.target. Nov 1 00:16:05.156190 systemd[1]: Stopped target remote-fs.target. Nov 1 00:16:05.165683 systemd[1]: Stopped target remote-fs-pre.target. Nov 1 00:16:05.174696 systemd[1]: Stopped target sysinit.target. Nov 1 00:16:05.182715 systemd[1]: Stopped target local-fs.target. Nov 1 00:16:05.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.191300 systemd[1]: Stopped target local-fs-pre.target. Nov 1 00:16:05.202704 systemd[1]: Stopped target swap.target. Nov 1 00:16:05.210853 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 00:16:05.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.210923 systemd[1]: Stopped dracut-pre-mount.service. Nov 1 00:16:05.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.669000 audit: BPF prog-id=6 op=UNLOAD Nov 1 00:16:05.234819 systemd[1]: Stopped target cryptsetup.target. Nov 1 00:16:05.249556 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Nov 1 00:16:05.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.249618 systemd[1]: Stopped dracut-initqueue.service. Nov 1 00:16:05.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.254090 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 00:16:05.254132 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Nov 1 00:16:05.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.279190 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 00:16:05.279245 systemd[1]: Stopped ignition-files.service. Nov 1 00:16:05.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.307421 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 1 00:16:05.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.307476 systemd[1]: Stopped flatcar-metadata-hostname.service. Nov 1 00:16:05.331493 systemd[1]: Stopping ignition-mount.service... Nov 1 00:16:05.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:16:05.358315 systemd[1]: Stopping sysroot-boot.service... Nov 1 00:16:05.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.368096 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 00:16:05.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.368171 systemd[1]: Stopped systemd-udev-trigger.service. Nov 1 00:16:05.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.377882 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 00:16:05.838314 kernel: hv_netvsc 000d3a07-5314-000d-3a07-5314000d3a07 eth0: Data path switched from VF: enP30301s1 Nov 1 00:16:05.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.377973 systemd[1]: Stopped dracut-pre-trigger.service. Nov 1 00:16:05.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.405162 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Nov 1 00:16:05.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.405840 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 00:16:05.405939 systemd[1]: Finished initrd-cleanup.service. Nov 1 00:16:05.428935 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 00:16:05.429046 systemd[1]: Stopped ignition-mount.service. Nov 1 00:16:05.472377 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 00:16:05.472441 systemd[1]: Stopped ignition-disks.service. Nov 1 00:16:05.504585 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 00:16:05.504644 systemd[1]: Stopped ignition-kargs.service. Nov 1 00:16:05.525189 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 1 00:16:05.525230 systemd[1]: Stopped ignition-fetch.service. Nov 1 00:16:05.538252 systemd[1]: Stopped target network.target. Nov 1 00:16:05.548635 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 00:16:05.548703 systemd[1]: Stopped ignition-fetch-offline.service. Nov 1 00:16:05.559297 systemd[1]: Stopped target paths.target. Nov 1 00:16:05.568462 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 00:16:05.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:05.571747 systemd[1]: Stopped systemd-ask-password-console.path. Nov 1 00:16:05.579806 systemd[1]: Stopped target slices.target. 
Nov 1 00:16:05.589748 systemd[1]: Stopped target sockets.target. Nov 1 00:16:05.600348 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 00:16:05.600391 systemd[1]: Closed iscsid.socket. Nov 1 00:16:05.609366 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 00:16:05.609387 systemd[1]: Closed iscsiuio.socket. Nov 1 00:16:05.618853 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 00:16:05.618895 systemd[1]: Stopped ignition-setup.service. Nov 1 00:16:05.627673 systemd[1]: Stopping systemd-networkd.service... Nov 1 00:16:05.635861 systemd[1]: Stopping systemd-resolved.service... Nov 1 00:16:05.645259 systemd-networkd[868]: eth0: DHCPv6 lease lost Nov 1 00:16:05.998000 audit: BPF prog-id=9 op=UNLOAD Nov 1 00:16:05.646608 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 00:16:05.646702 systemd[1]: Stopped systemd-networkd.service. Nov 1 00:16:05.656151 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 00:16:05.656258 systemd[1]: Stopped systemd-resolved.service. Nov 1 00:16:05.666304 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 00:16:05.666346 systemd[1]: Closed systemd-networkd.socket. Nov 1 00:16:05.675431 systemd[1]: Stopping network-cleanup.service... Nov 1 00:16:05.683644 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 00:16:05.683705 systemd[1]: Stopped parse-ip-for-networkd.service. Nov 1 00:16:05.689122 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:16:05.689172 systemd[1]: Stopped systemd-sysctl.service. Nov 1 00:16:05.703474 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 00:16:05.703518 systemd[1]: Stopped systemd-modules-load.service. Nov 1 00:16:05.717676 systemd[1]: Stopping systemd-udevd.service... Nov 1 00:16:05.728008 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Nov 1 00:16:05.728536 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 00:16:05.728637 systemd[1]: Stopped sysroot-boot.service. Nov 1 00:16:05.738443 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 1 00:16:05.738565 systemd[1]: Stopped systemd-udevd.service. Nov 1 00:16:05.747931 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 00:16:05.747976 systemd[1]: Closed systemd-udevd-control.socket. Nov 1 00:16:05.758304 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 1 00:16:05.758340 systemd[1]: Closed systemd-udevd-kernel.socket. Nov 1 00:16:05.763097 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 00:16:05.763141 systemd[1]: Stopped dracut-pre-udev.service. Nov 1 00:16:05.773711 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 00:16:05.773814 systemd[1]: Stopped dracut-cmdline.service. Nov 1 00:16:05.783933 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:16:06.136000 audit: BPF prog-id=5 op=UNLOAD Nov 1 00:16:06.136000 audit: BPF prog-id=4 op=UNLOAD Nov 1 00:16:06.136000 audit: BPF prog-id=3 op=UNLOAD Nov 1 00:16:06.136000 audit: BPF prog-id=8 op=UNLOAD Nov 1 00:16:06.136000 audit: BPF prog-id=7 op=UNLOAD Nov 1 00:16:05.783978 systemd[1]: Stopped dracut-cmdline-ask.service. Nov 1 00:16:05.792911 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 00:16:05.792954 systemd[1]: Stopped initrd-setup-root.service. Nov 1 00:16:05.804422 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Nov 1 00:16:05.813081 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 1 00:16:05.813147 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Nov 1 00:16:06.185747 systemd-journald[276]: Received SIGTERM from PID 1 (n/a). Nov 1 00:16:05.827631 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 00:16:06.185835 iscsid[873]: iscsid shutting down. 
Nov 1 00:16:05.827676 systemd[1]: Stopped kmod-static-nodes.service. Nov 1 00:16:05.832585 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:16:05.832624 systemd[1]: Stopped systemd-vconsole-setup.service. Nov 1 00:16:05.844120 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 1 00:16:05.844580 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 00:16:05.844668 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Nov 1 00:16:05.922970 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 00:16:05.923086 systemd[1]: Stopped network-cleanup.service. Nov 1 00:16:05.931436 systemd[1]: Reached target initrd-switch-root.target. Nov 1 00:16:05.943329 systemd[1]: Starting initrd-switch-root.service... Nov 1 00:16:06.136541 systemd[1]: Switching root. Nov 1 00:16:06.186175 systemd-journald[276]: Journal stopped Nov 1 00:16:44.394943 kernel: SELinux: Class mctp_socket not defined in policy. Nov 1 00:16:44.394963 kernel: SELinux: Class anon_inode not defined in policy. 
Nov 1 00:16:44.394973 kernel: SELinux: the above unknown classes and permissions will be allowed Nov 1 00:16:44.394983 kernel: SELinux: policy capability network_peer_controls=1 Nov 1 00:16:44.394991 kernel: SELinux: policy capability open_perms=1 Nov 1 00:16:44.394999 kernel: SELinux: policy capability extended_socket_class=1 Nov 1 00:16:44.395008 kernel: SELinux: policy capability always_check_network=0 Nov 1 00:16:44.395016 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 1 00:16:44.395024 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 1 00:16:44.395032 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 1 00:16:44.395040 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 1 00:16:44.395049 kernel: kauditd_printk_skb: 30 callbacks suppressed Nov 1 00:16:44.395058 kernel: audit: type=1403 audit(1761956171.222:86): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 1 00:16:44.395067 systemd[1]: Successfully loaded SELinux policy in 523.293ms. Nov 1 00:16:44.395078 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.464ms. Nov 1 00:16:44.395090 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Nov 1 00:16:44.395099 systemd[1]: Detected virtualization microsoft. Nov 1 00:16:44.395108 systemd[1]: Detected architecture arm64. Nov 1 00:16:44.395117 systemd[1]: Detected first boot. Nov 1 00:16:44.395126 systemd[1]: Hostname set to . Nov 1 00:16:44.395136 systemd[1]: Initializing machine ID from random generator. Nov 1 00:16:44.395145 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Nov 1 00:16:44.395156 kernel: audit: type=1400 audit(1761956174.644:87): avc: denied { associate } for pid=1120 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Nov 1 00:16:44.395166 kernel: audit: type=1300 audit(1761956174.644:87): arch=c00000b7 syscall=5 success=yes exit=0 a0=400014764c a1=40000c8ae0 a2=40000cea00 a3=32 items=0 ppid=1103 pid=1120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:16:44.395175 kernel: audit: type=1327 audit(1761956174.644:87): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Nov 1 00:16:44.395184 kernel: audit: type=1400 audit(1761956174.658:88): avc: denied { associate } for pid=1120 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Nov 1 00:16:44.395194 kernel: audit: type=1300 audit(1761956174.658:88): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000147729 a2=1ed a3=0 items=2 ppid=1103 pid=1120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:16:44.395204 kernel: audit: type=1307 audit(1761956174.658:88): cwd="/" Nov 1 00:16:44.395213 kernel: audit: type=1302 audit(1761956174.658:88): item=0 name=(null) inode=2 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:16:44.395222 kernel: audit: type=1302 audit(1761956174.658:88): item=1 name=(null) inode=3 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:16:44.395231 kernel: audit: type=1327 audit(1761956174.658:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Nov 1 00:16:44.395240 systemd[1]: Populated /etc with preset unit settings. Nov 1 00:16:44.395249 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:16:44.395258 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:16:44.395269 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:16:44.395278 systemd[1]: Queued start job for default target multi-user.target. Nov 1 00:16:44.395287 systemd[1]: Unnecessary job was removed for dev-sda6.device. Nov 1 00:16:44.395296 systemd[1]: Created slice system-addon\x2dconfig.slice. Nov 1 00:16:44.395307 systemd[1]: Created slice system-addon\x2drun.slice. Nov 1 00:16:44.395316 systemd[1]: Created slice system-getty.slice. Nov 1 00:16:44.395327 systemd[1]: Created slice system-modprobe.slice. Nov 1 00:16:44.395339 systemd[1]: Created slice system-serial\x2dgetty.slice. Nov 1 00:16:44.395348 systemd[1]: Created slice system-system\x2dcloudinit.slice. 
Nov 1 00:16:44.395357 systemd[1]: Created slice system-systemd\x2dfsck.slice. Nov 1 00:16:44.395366 systemd[1]: Created slice user.slice. Nov 1 00:16:44.395376 systemd[1]: Started systemd-ask-password-console.path. Nov 1 00:16:44.395385 systemd[1]: Started systemd-ask-password-wall.path. Nov 1 00:16:44.395394 systemd[1]: Set up automount boot.automount. Nov 1 00:16:44.395403 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Nov 1 00:16:44.395412 systemd[1]: Reached target integritysetup.target. Nov 1 00:16:44.395422 systemd[1]: Reached target remote-cryptsetup.target. Nov 1 00:16:44.395431 systemd[1]: Reached target remote-fs.target. Nov 1 00:16:44.395440 systemd[1]: Reached target slices.target. Nov 1 00:16:44.395449 systemd[1]: Reached target swap.target. Nov 1 00:16:44.395458 systemd[1]: Reached target torcx.target. Nov 1 00:16:44.395467 systemd[1]: Reached target veritysetup.target. Nov 1 00:16:44.395476 systemd[1]: Listening on systemd-coredump.socket. Nov 1 00:16:44.395486 systemd[1]: Listening on systemd-initctl.socket. Nov 1 00:16:44.395496 kernel: audit: type=1400 audit(1761956203.895:89): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Nov 1 00:16:44.395505 systemd[1]: Listening on systemd-journald-audit.socket. Nov 1 00:16:44.395515 kernel: audit: type=1335 audit(1761956203.895:90): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Nov 1 00:16:44.395524 systemd[1]: Listening on systemd-journald-dev-log.socket. Nov 1 00:16:44.395535 systemd[1]: Listening on systemd-journald.socket. Nov 1 00:16:44.395544 systemd[1]: Listening on systemd-networkd.socket. Nov 1 00:16:44.395553 systemd[1]: Listening on systemd-udevd-control.socket. Nov 1 00:16:44.395563 systemd[1]: Listening on systemd-udevd-kernel.socket. 
Nov 1 00:16:44.395573 systemd[1]: Listening on systemd-userdbd.socket. Nov 1 00:16:44.395582 systemd[1]: Mounting dev-hugepages.mount... Nov 1 00:16:44.395591 systemd[1]: Mounting dev-mqueue.mount... Nov 1 00:16:44.395600 systemd[1]: Mounting media.mount... Nov 1 00:16:44.395609 systemd[1]: Mounting sys-kernel-debug.mount... Nov 1 00:16:44.395619 systemd[1]: Mounting sys-kernel-tracing.mount... Nov 1 00:16:44.395629 systemd[1]: Mounting tmp.mount... Nov 1 00:16:44.395638 systemd[1]: Starting flatcar-tmpfiles.service... Nov 1 00:16:44.395647 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:16:44.395657 systemd[1]: Starting kmod-static-nodes.service... Nov 1 00:16:44.395666 systemd[1]: Starting modprobe@configfs.service... Nov 1 00:16:44.395675 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:16:44.395684 systemd[1]: Starting modprobe@drm.service... Nov 1 00:16:44.395694 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:16:44.395704 systemd[1]: Starting modprobe@fuse.service... Nov 1 00:16:44.395714 systemd[1]: Starting modprobe@loop.service... Nov 1 00:16:44.395732 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 00:16:44.395744 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Nov 1 00:16:44.395754 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Nov 1 00:16:44.395763 systemd[1]: Starting systemd-journald.service... Nov 1 00:16:44.395772 kernel: loop: module loaded Nov 1 00:16:44.395780 kernel: fuse: init (API version 7.34) Nov 1 00:16:44.395789 systemd[1]: Starting systemd-modules-load.service... Nov 1 00:16:44.395800 systemd[1]: Starting systemd-network-generator.service... Nov 1 00:16:44.395810 systemd[1]: Starting systemd-remount-fs.service... 
Nov 1 00:16:44.395819 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 00:16:44.395828 kernel: audit: type=1305 audit(1761956204.369:91): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Nov 1 00:16:44.395837 systemd[1]: Mounted dev-hugepages.mount. Nov 1 00:16:44.395849 systemd-journald[1231]: Journal started Nov 1 00:16:44.395887 systemd-journald[1231]: Runtime Journal (/run/log/journal/3689763710c340c28806b2ca02690a8e) is 8.0M, max 78.5M, 70.5M free. Nov 1 00:16:43.895000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Nov 1 00:16:44.369000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Nov 1 00:16:44.369000 audit[1231]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffe2f0ca90 a2=4000 a3=1 items=0 ppid=1 pid=1231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:16:44.426262 kernel: audit: type=1300 audit(1761956204.369:91): arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffe2f0ca90 a2=4000 a3=1 items=0 ppid=1 pid=1231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:16:44.426300 systemd[1]: Started systemd-journald.service. Nov 1 00:16:44.443568 kernel: audit: type=1327 audit(1761956204.369:91): proctitle="/usr/lib/systemd/systemd-journald" Nov 1 00:16:44.369000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Nov 1 00:16:44.444157 systemd[1]: Mounted dev-mqueue.mount. 
Nov 1 00:16:44.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:44.469841 kernel: audit: type=1130 audit(1761956204.442:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:44.471664 systemd[1]: Mounted media.mount. Nov 1 00:16:44.476073 systemd[1]: Mounted sys-kernel-debug.mount. Nov 1 00:16:44.480912 systemd[1]: Mounted sys-kernel-tracing.mount. Nov 1 00:16:44.485758 systemd[1]: Mounted tmp.mount. Nov 1 00:16:44.489709 systemd[1]: Finished flatcar-tmpfiles.service. Nov 1 00:16:44.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:44.495264 systemd[1]: Finished kmod-static-nodes.service. Nov 1 00:16:44.513743 kernel: audit: type=1130 audit(1761956204.494:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:44.518091 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 1 00:16:44.518281 systemd[1]: Finished modprobe@configfs.service. Nov 1 00:16:44.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:44.541252 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:16:44.541413 systemd[1]: Finished modprobe@dm_mod.service. 
Nov 1 00:16:44.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:44.565767 kernel: audit: type=1130 audit(1761956204.517:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:44.565816 kernel: audit: type=1130 audit(1761956204.540:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:44.565169 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:16:44.565321 systemd[1]: Finished modprobe@drm.service. Nov 1 00:16:44.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:44.587782 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:16:44.588049 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:16:44.589494 kernel: audit: type=1131 audit(1761956204.540:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:44.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:16:44.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:44.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:44.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:44.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:44.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:44.595177 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 1 00:16:44.595383 systemd[1]: Finished modprobe@fuse.service. Nov 1 00:16:44.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:44.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:44.600364 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:16:44.600626 systemd[1]: Finished modprobe@loop.service. 
Nov 1 00:16:44.605887 systemd[1]: Finished systemd-network-generator.service. Nov 1 00:16:44.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:44.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:44.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:44.611998 systemd[1]: Finished systemd-remount-fs.service. Nov 1 00:16:44.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:44.617475 systemd[1]: Reached target network-pre.target. Nov 1 00:16:44.623591 systemd[1]: Mounting sys-fs-fuse-connections.mount... Nov 1 00:16:44.629484 systemd[1]: Mounting sys-kernel-config.mount... Nov 1 00:16:44.634458 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 00:16:44.668353 systemd[1]: Starting systemd-hwdb-update.service... Nov 1 00:16:44.674201 systemd[1]: Starting systemd-journal-flush.service... Nov 1 00:16:44.678817 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:16:44.679964 systemd[1]: Starting systemd-random-seed.service... Nov 1 00:16:44.684409 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Nov 1 00:16:44.685676 systemd[1]: Starting systemd-sysusers.service... Nov 1 00:16:44.691824 systemd[1]: Finished systemd-modules-load.service. Nov 1 00:16:44.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:44.697596 systemd[1]: Finished systemd-udev-trigger.service. Nov 1 00:16:44.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:44.702957 systemd[1]: Mounted sys-fs-fuse-connections.mount. Nov 1 00:16:44.708388 systemd[1]: Mounted sys-kernel-config.mount. Nov 1 00:16:44.714639 systemd[1]: Starting systemd-sysctl.service... Nov 1 00:16:44.720074 systemd[1]: Starting systemd-udev-settle.service... Nov 1 00:16:44.730916 udevadm[1272]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 1 00:16:44.764174 systemd-journald[1231]: Time spent on flushing to /var/log/journal/3689763710c340c28806b2ca02690a8e is 13.165ms for 1036 entries. Nov 1 00:16:44.764174 systemd-journald[1231]: System Journal (/var/log/journal/3689763710c340c28806b2ca02690a8e) is 8.0M, max 2.6G, 2.6G free. Nov 1 00:16:44.858375 systemd-journald[1231]: Received client request to flush runtime journal. Nov 1 00:16:44.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:44.775050 systemd[1]: Finished systemd-random-seed.service. Nov 1 00:16:44.780557 systemd[1]: Reached target first-boot-complete.target. 
Nov 1 00:16:44.859417 systemd[1]: Finished systemd-journal-flush.service. Nov 1 00:16:44.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:44.941770 systemd[1]: Finished systemd-sysctl.service. Nov 1 00:16:44.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:45.638781 systemd[1]: Finished systemd-sysusers.service. Nov 1 00:16:45.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:45.645096 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Nov 1 00:16:46.496365 systemd[1]: Finished systemd-hwdb-update.service. Nov 1 00:16:46.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:46.607957 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Nov 1 00:16:46.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:46.614604 systemd[1]: Starting systemd-udevd.service... Nov 1 00:16:46.633364 systemd-udevd[1283]: Using default interface naming scheme 'v252'. Nov 1 00:16:47.865014 systemd[1]: Started systemd-udevd.service. 
Nov 1 00:16:47.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:47.877218 systemd[1]: Starting systemd-networkd.service... Nov 1 00:16:47.903832 systemd[1]: Found device dev-ttyAMA0.device. Nov 1 00:16:47.970161 systemd[1]: Starting systemd-userdbd.service... Nov 1 00:16:47.972205 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 00:16:47.978000 audit[1296]: AVC avc: denied { confidentiality } for pid=1296 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Nov 1 00:16:47.987750 kernel: hv_vmbus: registering driver hv_balloon Nov 1 00:16:47.987855 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Nov 1 00:16:47.997533 kernel: hv_balloon: Memory hot add disabled on ARM64 Nov 1 00:16:47.978000 audit[1296]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaab0e73c960 a1=aa2c a2=ffffa34624b0 a3=aaab0e698010 items=12 ppid=1283 pid=1296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:16:47.978000 audit: CWD cwd="/" Nov 1 00:16:47.978000 audit: PATH item=0 name=(null) inode=7236 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:16:47.978000 audit: PATH item=1 name=(null) inode=10705 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:16:47.978000 audit: PATH item=2 name=(null) inode=10705 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:16:47.978000 audit: PATH item=3 name=(null) inode=10706 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:16:47.978000 audit: PATH item=4 name=(null) inode=10705 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:16:47.978000 audit: PATH item=5 name=(null) inode=10707 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:16:47.978000 audit: PATH item=6 name=(null) inode=10705 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:16:47.978000 audit: PATH item=7 name=(null) inode=10708 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:16:47.978000 audit: PATH item=8 name=(null) inode=10705 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:16:47.978000 audit: PATH item=9 name=(null) inode=10709 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:16:47.978000 audit: PATH item=10 name=(null) inode=10705 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:16:47.978000 audit: PATH item=11 name=(null) inode=10710 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 
00:16:47.978000 audit: PROCTITLE proctitle="(udev-worker)" Nov 1 00:16:48.030302 kernel: hv_vmbus: registering driver hyperv_fb Nov 1 00:16:48.030392 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Nov 1 00:16:48.043449 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Nov 1 00:16:48.048973 kernel: Console: switching to colour dummy device 80x25 Nov 1 00:16:48.050745 kernel: Console: switching to colour frame buffer device 128x48 Nov 1 00:16:48.061755 kernel: hv_utils: Registering HyperV Utility Driver Nov 1 00:16:48.061844 kernel: hv_vmbus: registering driver hv_utils Nov 1 00:16:48.061869 kernel: hv_utils: Shutdown IC version 3.2 Nov 1 00:16:48.061885 kernel: hv_utils: Heartbeat IC version 3.0 Nov 1 00:16:48.067758 kernel: hv_utils: TimeSync IC version 4.0 Nov 1 00:16:48.387737 systemd[1]: Started systemd-userdbd.service. Nov 1 00:16:48.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:48.643760 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Nov 1 00:16:48.655765 systemd[1]: Finished systemd-udev-settle.service. Nov 1 00:16:48.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:48.662529 systemd[1]: Starting lvm2-activation-early.service... Nov 1 00:16:48.990810 lvm[1361]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:16:49.067382 systemd[1]: Finished lvm2-activation-early.service. Nov 1 00:16:49.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Nov 1 00:16:49.073270 systemd[1]: Reached target cryptsetup.target. Nov 1 00:16:49.079467 systemd[1]: Starting lvm2-activation.service... Nov 1 00:16:49.083624 lvm[1363]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:16:49.103397 systemd[1]: Finished lvm2-activation.service. Nov 1 00:16:49.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:49.108672 systemd[1]: Reached target local-fs-pre.target. Nov 1 00:16:49.113688 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 00:16:49.113713 systemd[1]: Reached target local-fs.target. Nov 1 00:16:49.118738 systemd[1]: Reached target machines.target. Nov 1 00:16:49.124604 systemd[1]: Starting ldconfig.service... Nov 1 00:16:49.161643 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:16:49.161714 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:16:49.162945 systemd[1]: Starting systemd-boot-update.service... Nov 1 00:16:49.168512 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Nov 1 00:16:49.175335 systemd[1]: Starting systemd-machine-id-commit.service... Nov 1 00:16:49.181055 systemd[1]: Starting systemd-sysext.service... Nov 1 00:16:49.187117 systemd-networkd[1304]: lo: Link UP Nov 1 00:16:49.187125 systemd-networkd[1304]: lo: Gained carrier Nov 1 00:16:49.187573 systemd-networkd[1304]: Enumeration completed Nov 1 00:16:49.187724 systemd[1]: Started systemd-networkd.service. 
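The repeated `WARNING: Failed to connect to lvmetad. Falling back to device scanning.` messages during lvm2-activation come from an lvm.conf that still expects the lvmetad daemon. A hedged sketch of the usual fix (path and setting assume an lvm2 version that still honors this option; not taken from the log):

```ini
# Hypothetical /etc/lvm/lvm.conf excerpt: stop LVM commands from trying
# lvmetad on images that ship without the daemon, silencing the warning.
global {
    use_lvmetad = 0
}
```

The fallback is harmless here (device scanning succeeds, as the subsequent "Finished lvm2-activation" entries show); the setting only suppresses the noise.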
Nov 1 00:16:49.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:49.196841 kernel: kauditd_printk_skb: 41 callbacks suppressed Nov 1 00:16:49.197102 kernel: audit: type=1130 audit(1761956209.191:123): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:49.198616 systemd[1]: Starting systemd-networkd-wait-online.service... Nov 1 00:16:49.236957 systemd-networkd[1304]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:16:49.238531 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1366 (bootctl) Nov 1 00:16:49.239904 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Nov 1 00:16:49.297309 kernel: mlx5_core 765d:00:02.0 enP30301s1: Link up Nov 1 00:16:49.301983 kernel: buffer_size[0]=0 is not enough for lossless buffer Nov 1 00:16:49.316826 systemd[1]: Unmounting usr-share-oem.mount... Nov 1 00:16:49.323385 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Nov 1 00:16:49.336668 kernel: hv_netvsc 000d3a07-5314-000d-3a07-5314000d3a07 eth0: Data path switched to VF: enP30301s1 Nov 1 00:16:49.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:49.339141 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 00:16:49.340239 systemd[1]: Finished systemd-machine-id-commit.service. 
Nov 1 00:16:49.342028 systemd-networkd[1304]: enP30301s1: Link UP Nov 1 00:16:49.342570 systemd-networkd[1304]: eth0: Link UP Nov 1 00:16:49.342680 systemd-networkd[1304]: eth0: Gained carrier Nov 1 00:16:49.365268 kernel: audit: type=1130 audit(1761956209.336:124): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:49.365408 kernel: audit: type=1130 audit(1761956209.363:125): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:49.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:49.364975 systemd[1]: usr-share-oem.mount: Deactivated successfully. Nov 1 00:16:49.365243 systemd[1]: Unmounted usr-share-oem.mount. Nov 1 00:16:49.390185 systemd-networkd[1304]: enP30301s1: Gained carrier Nov 1 00:16:49.395438 systemd-networkd[1304]: eth0: DHCPv4 address 10.200.20.42/24, gateway 10.200.20.1 acquired from 168.63.129.16 Nov 1 00:16:49.550311 kernel: loop0: detected capacity change from 0 to 207008 Nov 1 00:16:49.618480 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 00:16:49.645311 kernel: loop1: detected capacity change from 0 to 207008 Nov 1 00:16:49.660236 (sd-sysext)[1383]: Using extensions 'kubernetes'. Nov 1 00:16:49.660896 (sd-sysext)[1383]: Merged extensions into '/usr'. Nov 1 00:16:49.677322 systemd[1]: Mounting usr-share-oem.mount... Nov 1 00:16:49.681542 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:16:49.682826 systemd[1]: Starting modprobe@dm_mod.service... 
Nov 1 00:16:49.688137 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:16:49.695619 systemd[1]: Starting modprobe@loop.service... Nov 1 00:16:49.699768 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:16:49.699912 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:16:49.702664 systemd[1]: Mounted usr-share-oem.mount. Nov 1 00:16:49.707550 systemd-fsck[1378]: fsck.fat 4.2 (2021-01-31) Nov 1 00:16:49.707550 systemd-fsck[1378]: /dev/sda1: 236 files, 117310/258078 clusters Nov 1 00:16:49.709858 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Nov 1 00:16:49.719109 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:16:49.719279 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:16:49.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:49.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:49.744793 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:16:49.745112 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:16:49.763592 kernel: audit: type=1130 audit(1761956209.717:126): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:16:49.763712 kernel: audit: type=1130 audit(1761956209.719:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:49.763738 kernel: audit: type=1131 audit(1761956209.719:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:49.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:49.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:49.790371 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:16:49.790706 systemd[1]: Finished modprobe@loop.service. Nov 1 00:16:49.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:49.832381 kernel: audit: type=1130 audit(1761956209.788:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:49.832451 kernel: audit: type=1131 audit(1761956209.788:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:16:49.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:49.833724 systemd[1]: Mounting boot.mount... Nov 1 00:16:49.850912 kernel: audit: type=1130 audit(1761956209.808:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:49.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:49.871749 kernel: audit: type=1131 audit(1761956209.808:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:49.872763 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:16:49.873820 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:16:49.875763 systemd[1]: Finished systemd-sysext.service. Nov 1 00:16:49.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:49.883949 systemd[1]: Mounted boot.mount. Nov 1 00:16:49.895898 systemd[1]: Starting ensure-sysext.service... Nov 1 00:16:49.901607 systemd[1]: Starting systemd-tmpfiles-setup.service... Nov 1 00:16:49.912663 systemd[1]: Finished systemd-boot-update.service. 
Nov 1 00:16:49.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:49.918028 systemd[1]: Reloading. Nov 1 00:16:49.948109 systemd-tmpfiles[1404]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Nov 1 00:16:49.957213 /usr/lib/systemd/system-generators/torcx-generator[1424]: time="2025-11-01T00:16:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:16:49.957248 /usr/lib/systemd/system-generators/torcx-generator[1424]: time="2025-11-01T00:16:49Z" level=info msg="torcx already run" Nov 1 00:16:49.989979 systemd-tmpfiles[1404]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 00:16:50.007673 systemd-tmpfiles[1404]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 1 00:16:50.050645 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:16:50.050664 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:16:50.067587 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:16:50.139476 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:16:50.140589 systemd[1]: Starting modprobe@dm_mod.service... 
Nov 1 00:16:50.146139 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:16:50.151631 systemd[1]: Starting modprobe@loop.service... Nov 1 00:16:50.155633 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:16:50.155758 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:16:50.156547 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:16:50.156710 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:16:50.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:50.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:50.162123 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:16:50.162279 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:16:50.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:50.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:50.167808 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:16:50.167968 systemd[1]: Finished modprobe@loop.service. 
Nov 1 00:16:50.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:50.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:50.174630 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:16:50.175841 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:16:50.181236 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:16:50.187139 systemd[1]: Starting modprobe@loop.service... Nov 1 00:16:50.191421 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:16:50.191541 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:16:50.192332 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:16:50.192493 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:16:50.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:50.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:50.197863 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:16:50.198009 systemd[1]: Finished modprobe@efi_pstore.service. 
Nov 1 00:16:50.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:50.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:50.203312 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:16:50.203472 systemd[1]: Finished modprobe@loop.service. Nov 1 00:16:50.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:50.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:50.210935 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:16:50.212233 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:16:50.217479 systemd[1]: Starting modprobe@drm.service... Nov 1 00:16:50.222729 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:16:50.228238 systemd[1]: Starting modprobe@loop.service... Nov 1 00:16:50.232411 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:16:50.232535 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:16:50.233496 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:16:50.233646 systemd[1]: Finished modprobe@dm_mod.service. 
Nov 1 00:16:50.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:50.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:50.238991 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:16:50.239140 systemd[1]: Finished modprobe@drm.service. Nov 1 00:16:50.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:50.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:50.243886 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:16:50.244025 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:16:50.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:50.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:50.249249 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:16:50.249512 systemd[1]: Finished modprobe@loop.service. 
Nov 1 00:16:50.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:50.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:50.254849 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:16:50.254938 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:16:50.255942 systemd[1]: Finished ensure-sysext.service. Nov 1 00:16:50.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:50.605441 systemd-networkd[1304]: eth0: Gained IPv6LL Nov 1 00:16:50.612233 systemd[1]: Finished systemd-networkd-wait-online.service. Nov 1 00:16:50.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:53.430386 systemd[1]: Finished systemd-tmpfiles-setup.service. Nov 1 00:16:53.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:53.437709 systemd[1]: Starting audit-rules.service... Nov 1 00:16:53.442951 systemd[1]: Starting clean-ca-certificates.service... Nov 1 00:16:53.448852 systemd[1]: Starting systemd-journal-catalog-update.service... 
Nov 1 00:16:53.455758 systemd[1]: Starting systemd-resolved.service... Nov 1 00:16:53.461723 systemd[1]: Starting systemd-timesyncd.service... Nov 1 00:16:53.467740 systemd[1]: Starting systemd-update-utmp.service... Nov 1 00:16:53.472972 systemd[1]: Finished clean-ca-certificates.service. Nov 1 00:16:53.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:53.479228 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:16:53.530000 audit[1524]: SYSTEM_BOOT pid=1524 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Nov 1 00:16:53.534375 systemd[1]: Finished systemd-update-utmp.service. Nov 1 00:16:53.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:53.606698 systemd[1]: Started systemd-timesyncd.service. Nov 1 00:16:53.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:53.611949 systemd[1]: Reached target time-set.target. Nov 1 00:16:53.673798 systemd-resolved[1521]: Positive Trust Anchors: Nov 1 00:16:53.673813 systemd-resolved[1521]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:16:53.673840 systemd-resolved[1521]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 00:16:53.757382 systemd-resolved[1521]: Using system hostname 'ci-3510.3.8-n-c51a7922c9'. Nov 1 00:16:53.758914 systemd[1]: Started systemd-resolved.service. Nov 1 00:16:53.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:16:53.764155 systemd[1]: Reached target network.target. Nov 1 00:16:53.768908 systemd[1]: Reached target network-online.target. Nov 1 00:16:53.774375 systemd[1]: Reached target nss-lookup.target. Nov 1 00:16:53.947422 systemd[1]: Finished systemd-journal-catalog-update.service. Nov 1 00:16:53.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:16:53.975000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Nov 1 00:16:53.975000 audit[1540]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff0a91100 a2=420 a3=0 items=0 ppid=1517 pid=1540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:16:53.975000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Nov 1 00:16:53.977213 augenrules[1540]: No rules Nov 1 00:16:53.978311 systemd[1]: Finished audit-rules.service. Nov 1 00:16:54.052848 systemd-timesyncd[1523]: Contacted time server 144.202.41.38:123 (0.flatcar.pool.ntp.org). Nov 1 00:16:54.053262 systemd-timesyncd[1523]: Initial clock synchronization to Sat 2025-11-01 00:16:54.050896 UTC. Nov 1 00:17:02.264121 ldconfig[1365]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 00:17:02.273639 systemd[1]: Finished ldconfig.service. Nov 1 00:17:02.279922 systemd[1]: Starting systemd-update-done.service... Nov 1 00:17:02.339315 systemd[1]: Finished systemd-update-done.service. Nov 1 00:17:02.344463 systemd[1]: Reached target sysinit.target. Nov 1 00:17:02.349099 systemd[1]: Started motdgen.path. Nov 1 00:17:02.353049 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Nov 1 00:17:02.359902 systemd[1]: Started logrotate.timer. Nov 1 00:17:02.363993 systemd[1]: Started mdadm.timer. Nov 1 00:17:02.367704 systemd[1]: Started systemd-tmpfiles-clean.timer. Nov 1 00:17:02.372488 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 00:17:02.372523 systemd[1]: Reached target paths.target. Nov 1 00:17:02.376695 systemd[1]: Reached target timers.target. 
Nov 1 00:17:02.381694 systemd[1]: Listening on dbus.socket. Nov 1 00:17:02.387095 systemd[1]: Starting docker.socket... Nov 1 00:17:02.422750 systemd[1]: Listening on sshd.socket. Nov 1 00:17:02.427862 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:17:02.428463 systemd[1]: Listening on docker.socket. Nov 1 00:17:02.432812 systemd[1]: Reached target sockets.target. Nov 1 00:17:02.437257 systemd[1]: Reached target basic.target. Nov 1 00:17:02.441714 systemd[1]: System is tainted: cgroupsv1 Nov 1 00:17:02.441762 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 00:17:02.441785 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 00:17:02.442924 systemd[1]: Starting containerd.service... Nov 1 00:17:02.447876 systemd[1]: Starting dbus.service... Nov 1 00:17:02.452236 systemd[1]: Starting enable-oem-cloudinit.service... Nov 1 00:17:02.457824 systemd[1]: Starting extend-filesystems.service... Nov 1 00:17:02.462095 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Nov 1 00:17:02.479605 systemd[1]: Starting kubelet.service... Nov 1 00:17:02.484405 systemd[1]: Starting motdgen.service... Nov 1 00:17:02.489420 systemd[1]: Started nvidia.service. Nov 1 00:17:02.513925 systemd[1]: Starting prepare-helm.service... Nov 1 00:17:02.519009 systemd[1]: Starting ssh-key-proc-cmdline.service... Nov 1 00:17:02.524659 systemd[1]: Starting sshd-keygen.service... Nov 1 00:17:02.530208 systemd[1]: Starting systemd-logind.service... 
Nov 1 00:17:02.534957 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:17:02.535022 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 1 00:17:02.536198 systemd[1]: Starting update-engine.service... Nov 1 00:17:02.541219 systemd[1]: Starting update-ssh-keys-after-ignition.service... Nov 1 00:17:02.549106 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 00:17:02.549425 systemd[1]: Finished ssh-key-proc-cmdline.service. Nov 1 00:17:02.562633 jq[1555]: false Nov 1 00:17:02.564232 jq[1571]: true Nov 1 00:17:02.588165 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 00:17:02.588441 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Nov 1 00:17:02.624936 extend-filesystems[1556]: Found loop1 Nov 1 00:17:02.629244 extend-filesystems[1556]: Found sda Nov 1 00:17:02.629244 extend-filesystems[1556]: Found sda1 Nov 1 00:17:02.629244 extend-filesystems[1556]: Found sda2 Nov 1 00:17:02.629244 extend-filesystems[1556]: Found sda3 Nov 1 00:17:02.629244 extend-filesystems[1556]: Found usr Nov 1 00:17:02.629244 extend-filesystems[1556]: Found sda4 Nov 1 00:17:02.629244 extend-filesystems[1556]: Found sda6 Nov 1 00:17:02.629244 extend-filesystems[1556]: Found sda7 Nov 1 00:17:02.629244 extend-filesystems[1556]: Found sda9 Nov 1 00:17:02.629244 extend-filesystems[1556]: Checking size of /dev/sda9 Nov 1 00:17:02.686715 jq[1583]: true Nov 1 00:17:02.629762 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 00:17:02.630050 systemd[1]: Finished motdgen.service. 
Nov 1 00:17:02.704227 env[1586]: time="2025-11-01T00:17:02.704165884Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Nov 1 00:17:02.709924 systemd-logind[1567]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Nov 1 00:17:02.710901 systemd-logind[1567]: New seat seat0. Nov 1 00:17:02.766337 env[1586]: time="2025-11-01T00:17:02.766223808Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 1 00:17:02.766439 env[1586]: time="2025-11-01T00:17:02.766411394Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:17:02.769992 tar[1576]: linux-arm64/LICENSE Nov 1 00:17:02.770242 tar[1576]: linux-arm64/helm Nov 1 00:17:02.774996 env[1586]: time="2025-11-01T00:17:02.774945767Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:17:02.774996 env[1586]: time="2025-11-01T00:17:02.774988724Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:17:02.775779 env[1586]: time="2025-11-01T00:17:02.775747468Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:17:02.775822 env[1586]: time="2025-11-01T00:17:02.775778186Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Nov 1 00:17:02.775822 env[1586]: time="2025-11-01T00:17:02.775795345Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Nov 1 00:17:02.775822 env[1586]: time="2025-11-01T00:17:02.775805464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 1 00:17:02.775932 env[1586]: time="2025-11-01T00:17:02.775909816Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:17:02.776156 env[1586]: time="2025-11-01T00:17:02.776129520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:17:02.777384 env[1586]: time="2025-11-01T00:17:02.777352190Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:17:02.777384 env[1586]: time="2025-11-01T00:17:02.777381788Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 1 00:17:02.777543 env[1586]: time="2025-11-01T00:17:02.777447263Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Nov 1 00:17:02.777543 env[1586]: time="2025-11-01T00:17:02.777466062Z" level=info msg="metadata content store policy set" policy=shared Nov 1 00:17:02.793390 extend-filesystems[1556]: Old size kept for /dev/sda9 Nov 1 00:17:02.799096 extend-filesystems[1556]: Found sr0 Nov 1 00:17:02.793966 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 00:17:02.819689 env[1586]: time="2025-11-01T00:17:02.814834918Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Nov 1 00:17:02.819689 env[1586]: time="2025-11-01T00:17:02.814879395Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 1 00:17:02.819689 env[1586]: time="2025-11-01T00:17:02.814892714Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 1 00:17:02.819689 env[1586]: time="2025-11-01T00:17:02.814925312Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 1 00:17:02.819689 env[1586]: time="2025-11-01T00:17:02.814943270Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 1 00:17:02.819689 env[1586]: time="2025-11-01T00:17:02.814957269Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 1 00:17:02.819689 env[1586]: time="2025-11-01T00:17:02.814970868Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 1 00:17:02.819689 env[1586]: time="2025-11-01T00:17:02.815321163Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 00:17:02.819689 env[1586]: time="2025-11-01T00:17:02.815339161Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Nov 1 00:17:02.819689 env[1586]: time="2025-11-01T00:17:02.815352280Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 00:17:02.819689 env[1586]: time="2025-11-01T00:17:02.815364599Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 00:17:02.819689 env[1586]: time="2025-11-01T00:17:02.815377878Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Nov 1 00:17:02.819689 env[1586]: time="2025-11-01T00:17:02.815495910Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 1 00:17:02.819689 env[1586]: time="2025-11-01T00:17:02.815573464Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 00:17:02.794222 systemd[1]: Finished extend-filesystems.service. Nov 1 00:17:02.820861 env[1586]: time="2025-11-01T00:17:02.815857163Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 00:17:02.820861 env[1586]: time="2025-11-01T00:17:02.815880721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 00:17:02.820861 env[1586]: time="2025-11-01T00:17:02.815896680Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 00:17:02.820861 env[1586]: time="2025-11-01T00:17:02.815938357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 00:17:02.820861 env[1586]: time="2025-11-01T00:17:02.815951836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 00:17:02.820861 env[1586]: time="2025-11-01T00:17:02.815965395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 1 00:17:02.820861 env[1586]: time="2025-11-01T00:17:02.815977354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 00:17:02.820861 env[1586]: time="2025-11-01T00:17:02.815989473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 00:17:02.820861 env[1586]: time="2025-11-01T00:17:02.816002433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Nov 1 00:17:02.820861 env[1586]: time="2025-11-01T00:17:02.816013472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 00:17:02.820861 env[1586]: time="2025-11-01T00:17:02.816025671Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 1 00:17:02.820861 env[1586]: time="2025-11-01T00:17:02.816041590Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 00:17:02.820861 env[1586]: time="2025-11-01T00:17:02.816147302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 00:17:02.820861 env[1586]: time="2025-11-01T00:17:02.816163141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 1 00:17:02.820861 env[1586]: time="2025-11-01T00:17:02.816175180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 1 00:17:02.820747 systemd[1]: Started containerd.service. Nov 1 00:17:02.821940 env[1586]: time="2025-11-01T00:17:02.816186379Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 00:17:02.821940 env[1586]: time="2025-11-01T00:17:02.816200018Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Nov 1 00:17:02.821940 env[1586]: time="2025-11-01T00:17:02.816211697Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1
Nov 1 00:17:02.821940 env[1586]: time="2025-11-01T00:17:02.816228776Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Nov 1 00:17:02.821940 env[1586]: time="2025-11-01T00:17:02.816262573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 1 00:17:02.822104 env[1586]: time="2025-11-01T00:17:02.817585076Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Nov 1 00:17:02.822104 env[1586]: time="2025-11-01T00:17:02.817645872Z" level=info msg="Connect containerd service"
Nov 1 00:17:02.822104 env[1586]: time="2025-11-01T00:17:02.817672630Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Nov 1 00:17:02.822104 env[1586]: time="2025-11-01T00:17:02.819350427Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 1 00:17:02.822104 env[1586]: time="2025-11-01T00:17:02.819459779Z" level=info msg="Start subscribing containerd event"
Nov 1 00:17:02.822104 env[1586]: time="2025-11-01T00:17:02.819500816Z" level=info msg="Start recovering state"
Nov 1 00:17:02.822104 env[1586]: time="2025-11-01T00:17:02.819557531Z" level=info msg="Start event monitor"
Nov 1 00:17:02.822104 env[1586]: time="2025-11-01T00:17:02.819575290Z" level=info msg="Start snapshots syncer"
Nov 1 00:17:02.822104 env[1586]: time="2025-11-01T00:17:02.819584410Z" level=info msg="Start cni network conf syncer for default"
Nov 1 00:17:02.822104 env[1586]: time="2025-11-01T00:17:02.819594769Z" level=info msg="Start streaming server"
Nov 1 00:17:02.822104 env[1586]: time="2025-11-01T00:17:02.820537140Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 1 00:17:02.822104 env[1586]: time="2025-11-01T00:17:02.820577137Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 1 00:17:02.842698 env[1586]: time="2025-11-01T00:17:02.841965686Z" level=info msg="containerd successfully booted in 0.138573s"
Nov 1 00:17:02.833155 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Nov 1 00:17:02.842793 bash[1610]: Updated "/home/core/.ssh/authorized_keys"
Nov 1 00:17:03.011132 systemd[1]: nvidia.service: Deactivated successfully.
Nov 1 00:17:03.325969 dbus-daemon[1554]: [system] SELinux support is enabled
Nov 1 00:17:03.326160 systemd[1]: Started dbus.service.
Nov 1 00:17:03.331848 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 1 00:17:03.331871 systemd[1]: Reached target system-config.target.
Nov 1 00:17:03.341067 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 1 00:17:03.341093 systemd[1]: Reached target user-config.target.
Nov 1 00:17:03.348362 systemd[1]: Started systemd-logind.service.
Nov 1 00:17:03.451097 update_engine[1569]: I1101 00:17:03.434278 1569 main.cc:92] Flatcar Update Engine starting
Nov 1 00:17:03.529700 systemd[1]: Started update-engine.service.
Nov 1 00:17:03.529981 update_engine[1569]: I1101 00:17:03.529739 1569 update_check_scheduler.cc:74] Next update check in 8m46s
Nov 1 00:17:03.537658 systemd[1]: Started locksmithd.service.
Nov 1 00:17:03.608900 tar[1576]: linux-arm64/README.md
Nov 1 00:17:03.614164 systemd[1]: Finished prepare-helm.service.
Nov 1 00:17:03.675580 systemd[1]: Started kubelet.service.
Nov 1 00:17:04.151775 kubelet[1676]: E1101 00:17:04.151715 1676 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:17:04.153459 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:17:04.153597 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:17:04.448669 sshd_keygen[1579]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 1 00:17:04.465138 systemd[1]: Finished sshd-keygen.service.
Nov 1 00:17:04.471328 systemd[1]: Starting issuegen.service...
Nov 1 00:17:04.476050 systemd[1]: Started waagent.service.
Nov 1 00:17:04.480787 systemd[1]: issuegen.service: Deactivated successfully.
Nov 1 00:17:04.480991 systemd[1]: Finished issuegen.service.
Nov 1 00:17:04.486707 systemd[1]: Starting systemd-user-sessions.service...
Nov 1 00:17:04.542232 systemd[1]: Finished systemd-user-sessions.service.
Nov 1 00:17:04.549399 systemd[1]: Started getty@tty1.service.
Nov 1 00:17:04.554866 systemd[1]: Started serial-getty@ttyAMA0.service.
Nov 1 00:17:04.560034 systemd[1]: Reached target getty.target.
Nov 1 00:17:04.566028 systemd[1]: Reached target multi-user.target.
Nov 1 00:17:04.571961 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Nov 1 00:17:04.579996 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 1 00:17:04.580200 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Nov 1 00:17:04.587426 systemd[1]: Startup finished in 20.542s (kernel) + 53.822s (userspace) = 1min 14.365s.
Nov 1 00:17:05.232264 locksmithd[1668]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 1 00:17:05.708246 login[1705]: pam_lastlog(login:session): file /var/log/lastlog is locked/write
Nov 1 00:17:05.740175 login[1704]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Nov 1 00:17:05.904642 systemd[1]: Created slice user-500.slice.
Nov 1 00:17:05.905630 systemd[1]: Starting user-runtime-dir@500.service...
Nov 1 00:17:05.907812 systemd-logind[1567]: New session 1 of user core.
Nov 1 00:17:05.961969 systemd[1]: Finished user-runtime-dir@500.service.
Nov 1 00:17:05.963174 systemd[1]: Starting user@500.service...
Nov 1 00:17:06.032001 (systemd)[1711]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:17:06.572088 systemd[1711]: Queued start job for default target default.target.
Nov 1 00:17:06.572745 systemd[1711]: Reached target paths.target.
Nov 1 00:17:06.572774 systemd[1711]: Reached target sockets.target.
Nov 1 00:17:06.572786 systemd[1711]: Reached target timers.target.
Nov 1 00:17:06.572796 systemd[1711]: Reached target basic.target.
Nov 1 00:17:06.572916 systemd[1]: Started user@500.service.
Nov 1 00:17:06.573764 systemd[1]: Started session-1.scope.
Nov 1 00:17:06.573960 systemd[1711]: Reached target default.target.
Nov 1 00:17:06.574115 systemd[1711]: Startup finished in 536ms.
Nov 1 00:17:06.709704 login[1705]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Nov 1 00:17:06.713340 systemd-logind[1567]: New session 2 of user core.
Nov 1 00:17:06.713943 systemd[1]: Started session-2.scope.
Nov 1 00:17:14.200221 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 1 00:17:14.200415 systemd[1]: Stopped kubelet.service.
Nov 1 00:17:14.201800 systemd[1]: Starting kubelet.service...
Nov 1 00:17:14.902619 systemd[1]: Started kubelet.service.
Nov 1 00:17:14.940838 kubelet[1741]: E1101 00:17:14.940800 1741 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:17:14.943096 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:17:14.943241 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:17:20.024724 waagent[1699]: 2025-11-01T00:17:20.024621Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2
Nov 1 00:17:20.066676 waagent[1699]: 2025-11-01T00:17:20.066585Z INFO Daemon Daemon OS: flatcar 3510.3.8
Nov 1 00:17:20.071898 waagent[1699]: 2025-11-01T00:17:20.071825Z INFO Daemon Daemon Python: 3.9.16
Nov 1 00:17:20.077342 waagent[1699]: 2025-11-01T00:17:20.077254Z INFO Daemon Daemon Run daemon
Nov 1 00:17:20.082420 waagent[1699]: 2025-11-01T00:17:20.082360Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.8'
Nov 1 00:17:20.115632 waagent[1699]: 2025-11-01T00:17:20.115503Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Nov 1 00:17:20.131696 waagent[1699]: 2025-11-01T00:17:20.131562Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Nov 1 00:17:20.142882 waagent[1699]: 2025-11-01T00:17:20.142794Z INFO Daemon Daemon cloud-init is enabled: False
Nov 1 00:17:20.148479 waagent[1699]: 2025-11-01T00:17:20.148404Z INFO Daemon Daemon Using waagent for provisioning
Nov 1 00:17:20.155029 waagent[1699]: 2025-11-01T00:17:20.154957Z INFO Daemon Daemon Activate resource disk
Nov 1 00:17:20.160375 waagent[1699]: 2025-11-01T00:17:20.160306Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Nov 1 00:17:20.175656 waagent[1699]: 2025-11-01T00:17:20.175587Z INFO Daemon Daemon Found device: None
Nov 1 00:17:20.180844 waagent[1699]: 2025-11-01T00:17:20.180780Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Nov 1 00:17:20.191345 waagent[1699]: 2025-11-01T00:17:20.191244Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Nov 1 00:17:20.205309 waagent[1699]: 2025-11-01T00:17:20.205224Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Nov 1 00:17:20.211782 waagent[1699]: 2025-11-01T00:17:20.211722Z INFO Daemon Daemon Running default provisioning handler
Nov 1 00:17:20.225535 waagent[1699]: 2025-11-01T00:17:20.225417Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Nov 1 00:17:20.241819 waagent[1699]: 2025-11-01T00:17:20.241706Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Nov 1 00:17:20.252744 waagent[1699]: 2025-11-01T00:17:20.252677Z INFO Daemon Daemon cloud-init is enabled: False
Nov 1 00:17:20.258109 waagent[1699]: 2025-11-01T00:17:20.258048Z INFO Daemon Daemon Copying ovf-env.xml
Nov 1 00:17:20.470155 waagent[1699]: 2025-11-01T00:17:20.468714Z INFO Daemon Daemon Successfully mounted dvd
Nov 1 00:17:21.303385 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Nov 1 00:17:22.774543 waagent[1699]: 2025-11-01T00:17:22.774395Z INFO Daemon Daemon Detect protocol endpoint
Nov 1 00:17:22.779685 waagent[1699]: 2025-11-01T00:17:22.779614Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Nov 1 00:17:22.785662 waagent[1699]: 2025-11-01T00:17:22.785604Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Nov 1 00:17:22.792761 waagent[1699]: 2025-11-01T00:17:22.792704Z INFO Daemon Daemon Test for route to 168.63.129.16
Nov 1 00:17:22.798434 waagent[1699]: 2025-11-01T00:17:22.798379Z INFO Daemon Daemon Route to 168.63.129.16 exists
Nov 1 00:17:22.803718 waagent[1699]: 2025-11-01T00:17:22.803663Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Nov 1 00:17:24.260327 waagent[1699]: 2025-11-01T00:17:24.260239Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Nov 1 00:17:24.267811 waagent[1699]: 2025-11-01T00:17:24.267765Z INFO Daemon Daemon Wire protocol version:2012-11-30
Nov 1 00:17:24.273516 waagent[1699]: 2025-11-01T00:17:24.273457Z INFO Daemon Daemon Server preferred version:2015-04-05
Nov 1 00:17:24.950227 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 1 00:17:24.950417 systemd[1]: Stopped kubelet.service.
Nov 1 00:17:24.951767 systemd[1]: Starting kubelet.service...
Nov 1 00:17:25.685040 systemd[1]: Started kubelet.service.
Nov 1 00:17:25.736511 kubelet[1772]: E1101 00:17:25.736474 1772 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:17:25.738328 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:17:25.738457 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:17:26.022721 waagent[1699]: 2025-11-01T00:17:26.022582Z INFO Daemon Daemon Initializing goal state during protocol detection
Nov 1 00:17:26.038451 waagent[1699]: 2025-11-01T00:17:26.038385Z INFO Daemon Daemon Forcing an update of the goal state..
Nov 1 00:17:26.044984 waagent[1699]: 2025-11-01T00:17:26.044921Z INFO Daemon Daemon Fetching goal state [incarnation 1]
Nov 1 00:17:26.126589 waagent[1699]: 2025-11-01T00:17:26.126469Z INFO Daemon Daemon Found private key matching thumbprint 8666A484BF2972EFBD3DA22FA0727CBE2864DB2D
Nov 1 00:17:26.135510 waagent[1699]: 2025-11-01T00:17:26.135434Z INFO Daemon Daemon Fetch goal state completed
Nov 1 00:17:26.198616 waagent[1699]: 2025-11-01T00:17:26.198558Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 3a9490a6-ffca-4b44-bd94-8120463241e3 New eTag: 14050578838232917432]
Nov 1 00:17:26.210253 waagent[1699]: 2025-11-01T00:17:26.210167Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob
Nov 1 00:17:26.228414 waagent[1699]: 2025-11-01T00:17:26.228326Z INFO Daemon Daemon Starting provisioning
Nov 1 00:17:26.233748 waagent[1699]: 2025-11-01T00:17:26.233680Z INFO Daemon Daemon Handle ovf-env.xml.
Nov 1 00:17:26.238719 waagent[1699]: 2025-11-01T00:17:26.238658Z INFO Daemon Daemon Set hostname [ci-3510.3.8-n-c51a7922c9]
Nov 1 00:17:26.449673 waagent[1699]: 2025-11-01T00:17:26.449532Z INFO Daemon Daemon Publish hostname [ci-3510.3.8-n-c51a7922c9]
Nov 1 00:17:26.457217 waagent[1699]: 2025-11-01T00:17:26.457132Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Nov 1 00:17:26.464453 waagent[1699]: 2025-11-01T00:17:26.464380Z INFO Daemon Daemon Primary interface is [eth0]
Nov 1 00:17:26.480971 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully.
Nov 1 00:17:26.481178 systemd[1]: Stopped systemd-networkd-wait-online.service.
Nov 1 00:17:26.481231 systemd[1]: Stopping systemd-networkd-wait-online.service...
Nov 1 00:17:26.481444 systemd[1]: Stopping systemd-networkd.service...
Nov 1 00:17:26.485337 systemd-networkd[1304]: eth0: DHCPv6 lease lost
Nov 1 00:17:26.486659 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 1 00:17:26.486901 systemd[1]: Stopped systemd-networkd.service.
Nov 1 00:17:26.488972 systemd[1]: Starting systemd-networkd.service...
Nov 1 00:17:26.522221 systemd-networkd[1788]: enP30301s1: Link UP
Nov 1 00:17:26.522235 systemd-networkd[1788]: enP30301s1: Gained carrier
Nov 1 00:17:26.523274 systemd-networkd[1788]: eth0: Link UP
Nov 1 00:17:26.523412 systemd-networkd[1788]: eth0: Gained carrier
Nov 1 00:17:26.523777 systemd-networkd[1788]: lo: Link UP
Nov 1 00:17:26.523787 systemd-networkd[1788]: lo: Gained carrier
Nov 1 00:17:26.524020 systemd-networkd[1788]: eth0: Gained IPv6LL
Nov 1 00:17:26.525159 systemd-networkd[1788]: Enumeration completed
Nov 1 00:17:26.525317 systemd[1]: Started systemd-networkd.service.
Nov 1 00:17:26.526796 waagent[1699]: 2025-11-01T00:17:26.526651Z INFO Daemon Daemon Create user account if not exists
Nov 1 00:17:26.527085 systemd[1]: Starting systemd-networkd-wait-online.service...
Nov 1 00:17:26.533806 waagent[1699]: 2025-11-01T00:17:26.533723Z INFO Daemon Daemon User core already exists, skip useradd
Nov 1 00:17:26.540204 waagent[1699]: 2025-11-01T00:17:26.540099Z INFO Daemon Daemon Configure sudoer
Nov 1 00:17:26.541327 systemd-networkd[1788]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:17:26.558413 systemd-networkd[1788]: eth0: DHCPv4 address 10.200.20.42/24, gateway 10.200.20.1 acquired from 168.63.129.16
Nov 1 00:17:26.560900 waagent[1699]: 2025-11-01T00:17:26.560794Z INFO Daemon Daemon Configure sshd
Nov 1 00:17:26.565880 waagent[1699]: 2025-11-01T00:17:26.565539Z INFO Daemon Daemon Deploy ssh public key.
Nov 1 00:17:26.566509 systemd[1]: Finished systemd-networkd-wait-online.service.
Nov 1 00:17:27.269675 waagent[1699]: 2025-11-01T00:17:27.269577Z INFO Daemon Daemon Provisioning complete
Nov 1 00:17:27.289990 waagent[1699]: 2025-11-01T00:17:27.289923Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Nov 1 00:17:27.296806 waagent[1699]: 2025-11-01T00:17:27.296724Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Nov 1 00:17:27.313250 waagent[1699]: 2025-11-01T00:17:27.313170Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent
Nov 1 00:17:27.609884 waagent[1795]: 2025-11-01T00:17:27.609742Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent
Nov 1 00:17:27.610969 waagent[1795]: 2025-11-01T00:17:27.610915Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Nov 1 00:17:27.611209 waagent[1795]: 2025-11-01T00:17:27.611162Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Nov 1 00:17:27.623541 waagent[1795]: 2025-11-01T00:17:27.623479Z INFO ExtHandler ExtHandler Forcing an update of the goal state..
Nov 1 00:17:27.623801 waagent[1795]: 2025-11-01T00:17:27.623755Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1]
Nov 1 00:17:27.681069 waagent[1795]: 2025-11-01T00:17:27.680929Z INFO ExtHandler ExtHandler Found private key matching thumbprint 8666A484BF2972EFBD3DA22FA0727CBE2864DB2D
Nov 1 00:17:27.681553 waagent[1795]: 2025-11-01T00:17:27.681501Z INFO ExtHandler ExtHandler Fetch goal state completed
Nov 1 00:17:27.695969 waagent[1795]: 2025-11-01T00:17:27.695916Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: c666e59f-1955-461c-955c-86836b852c29 New eTag: 14050578838232917432]
Nov 1 00:17:27.696685 waagent[1795]: 2025-11-01T00:17:27.696630Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob
Nov 1 00:17:28.374551 waagent[1795]: 2025-11-01T00:17:28.374412Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Nov 1 00:17:28.400454 waagent[1795]: 2025-11-01T00:17:28.400358Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1795
Nov 1 00:17:28.404107 waagent[1795]: 2025-11-01T00:17:28.404042Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk']
Nov 1 00:17:28.405364 waagent[1795]: 2025-11-01T00:17:28.405306Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Nov 1 00:17:29.488537 waagent[1795]: 2025-11-01T00:17:29.488468Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Nov 1 00:17:29.488944 waagent[1795]: 2025-11-01T00:17:29.488885Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Nov 1 00:17:29.496736 waagent[1795]: 2025-11-01T00:17:29.496675Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Nov 1 00:17:29.497225 waagent[1795]: 2025-11-01T00:17:29.497166Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Nov 1 00:17:29.498364 waagent[1795]: 2025-11-01T00:17:29.498306Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True]
Nov 1 00:17:29.499636 waagent[1795]: 2025-11-01T00:17:29.499568Z INFO ExtHandler ExtHandler Starting env monitor service.
Nov 1 00:17:29.500246 waagent[1795]: 2025-11-01T00:17:29.500188Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Nov 1 00:17:29.500540 waagent[1795]: 2025-11-01T00:17:29.500489Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Nov 1 00:17:29.501161 waagent[1795]: 2025-11-01T00:17:29.501105Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Nov 1 00:17:29.501602 waagent[1795]: 2025-11-01T00:17:29.501546Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Nov 1 00:17:29.501602 waagent[1795]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Nov 1 00:17:29.501602 waagent[1795]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Nov 1 00:17:29.501602 waagent[1795]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Nov 1 00:17:29.501602 waagent[1795]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Nov 1 00:17:29.501602 waagent[1795]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Nov 1 00:17:29.501602 waagent[1795]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Nov 1 00:17:29.503880 waagent[1795]: 2025-11-01T00:17:29.503719Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Nov 1 00:17:29.504659 waagent[1795]: 2025-11-01T00:17:29.504599Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Nov 1 00:17:29.504914 waagent[1795]: 2025-11-01T00:17:29.504867Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Nov 1 00:17:29.505581 waagent[1795]: 2025-11-01T00:17:29.505519Z INFO EnvHandler ExtHandler Configure routes
Nov 1 00:17:29.505816 waagent[1795]: 2025-11-01T00:17:29.505768Z INFO EnvHandler ExtHandler Gateway:None
Nov 1 00:17:29.506010 waagent[1795]: 2025-11-01T00:17:29.505967Z INFO EnvHandler ExtHandler Routes:None
Nov 1 00:17:29.506928 waagent[1795]: 2025-11-01T00:17:29.506871Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Nov 1 00:17:29.507017 waagent[1795]: 2025-11-01T00:17:29.506951Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Nov 1 00:17:29.507745 waagent[1795]: 2025-11-01T00:17:29.507665Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Nov 1 00:17:29.507832 waagent[1795]: 2025-11-01T00:17:29.507768Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Nov 1 00:17:29.509186 waagent[1795]: 2025-11-01T00:17:29.509115Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Nov 1 00:17:29.518633 waagent[1795]: 2025-11-01T00:17:29.518564Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod)
Nov 1 00:17:29.519314 waagent[1795]: 2025-11-01T00:17:29.519239Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Nov 1 00:17:29.521229 waagent[1795]: 2025-11-01T00:17:29.521164Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders'
Nov 1 00:17:29.580601 waagent[1795]: 2025-11-01T00:17:29.580539Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel.
Nov 1 00:17:29.597972 waagent[1795]: 2025-11-01T00:17:29.597830Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1788'
Nov 1 00:17:29.741766 waagent[1795]: 2025-11-01T00:17:29.741602Z INFO MonitorHandler ExtHandler Network interfaces:
Nov 1 00:17:29.741766 waagent[1795]: Executing ['ip', '-a', '-o', 'link']:
Nov 1 00:17:29.741766 waagent[1795]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Nov 1 00:17:29.741766 waagent[1795]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:07:53:14 brd ff:ff:ff:ff:ff:ff
Nov 1 00:17:29.741766 waagent[1795]: 3: enP30301s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:07:53:14 brd ff:ff:ff:ff:ff:ff\ altname enP30301p0s2
Nov 1 00:17:29.741766 waagent[1795]: Executing ['ip', '-4', '-a', '-o', 'address']:
Nov 1 00:17:29.741766 waagent[1795]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Nov 1 00:17:29.741766 waagent[1795]: 2: eth0 inet 10.200.20.42/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Nov 1 00:17:29.741766 waagent[1795]: Executing ['ip', '-6', '-a', '-o', 'address']:
Nov 1 00:17:29.741766 waagent[1795]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Nov 1 00:17:29.741766 waagent[1795]: 2: eth0 inet6 fe80::20d:3aff:fe07:5314/64 scope link \ valid_lft forever preferred_lft forever
Nov 1 00:17:30.109403 waagent[1795]: 2025-11-01T00:17:30.109292Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.15.0.1 -- exiting
Nov 1 00:17:30.318996 waagent[1699]: 2025-11-01T00:17:30.318849Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running
Nov 1 00:17:30.325096 waagent[1699]: 2025-11-01T00:17:30.325041Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.15.0.1 to be the latest agent
Nov 1 00:17:31.658074 waagent[1822]: 2025-11-01T00:17:31.657987Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.15.0.1)
Nov 1 00:17:31.659134 waagent[1822]: 2025-11-01T00:17:31.659080Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.8
Nov 1 00:17:31.659410 waagent[1822]: 2025-11-01T00:17:31.659363Z INFO ExtHandler ExtHandler Python: 3.9.16
Nov 1 00:17:31.659645 waagent[1822]: 2025-11-01T00:17:31.659601Z INFO ExtHandler ExtHandler CPU Arch: aarch64
Nov 1 00:17:31.672773 waagent[1822]: 2025-11-01T00:17:31.672665Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.8; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: aarch64; systemd: True; systemd_version: systemd 252 (252); LISDrivers: Absent; logrotate: logrotate 3.20.1;
Nov 1 00:17:31.673377 waagent[1822]: 2025-11-01T00:17:31.673323Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Nov 1 00:17:31.673643 waagent[1822]: 2025-11-01T00:17:31.673597Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Nov 1 00:17:31.673962 waagent[1822]: 2025-11-01T00:17:31.673914Z INFO ExtHandler ExtHandler Initializing the goal state...
Nov 1 00:17:31.687639 waagent[1822]: 2025-11-01T00:17:31.687569Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Nov 1 00:17:31.700009 waagent[1822]: 2025-11-01T00:17:31.699953Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177
Nov 1 00:17:31.701212 waagent[1822]: 2025-11-01T00:17:31.701160Z INFO ExtHandler
Nov 1 00:17:31.701490 waagent[1822]: 2025-11-01T00:17:31.701440Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 63633a32-5a4a-4296-a26e-53ce13363bc4 eTag: 14050578838232917432 source: Fabric]
Nov 1 00:17:31.702346 waagent[1822]: 2025-11-01T00:17:31.702276Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Nov 1 00:17:31.703689 waagent[1822]: 2025-11-01T00:17:31.703634Z INFO ExtHandler
Nov 1 00:17:31.703936 waagent[1822]: 2025-11-01T00:17:31.703890Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Nov 1 00:17:31.713886 waagent[1822]: 2025-11-01T00:17:31.713829Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Nov 1 00:17:31.714611 waagent[1822]: 2025-11-01T00:17:31.714565Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Nov 1 00:17:31.735280 waagent[1822]: 2025-11-01T00:17:31.735217Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel.
Nov 1 00:17:31.797843 waagent[1822]: 2025-11-01T00:17:31.797715Z INFO ExtHandler Downloaded certificate {'thumbprint': '8666A484BF2972EFBD3DA22FA0727CBE2864DB2D', 'hasPrivateKey': True}
Nov 1 00:17:31.799403 waagent[1822]: 2025-11-01T00:17:31.799346Z INFO ExtHandler Fetch goal state from WireServer completed
Nov 1 00:17:31.800429 waagent[1822]: 2025-11-01T00:17:31.800375Z INFO ExtHandler ExtHandler Goal state initialization completed.
Nov 1 00:17:31.821032 waagent[1822]: 2025-11-01T00:17:31.820936Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) Nov 1 00:17:31.828870 waagent[1822]: 2025-11-01T00:17:31.828778Z INFO ExtHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules Nov 1 00:17:32.414609 waagent[1822]: 2025-11-01T00:17:32.414486Z INFO ExtHandler ExtHandler Did not find a legacy firewall rule: ['iptables', '-w', '-t', 'security', '-C', 'OUTPUT', '-d', '168.63.129.16', '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'ACCEPT'] Nov 1 00:17:32.414868 waagent[1822]: 2025-11-01T00:17:32.414816Z INFO ExtHandler ExtHandler Checking state of the firewall Nov 1 00:17:33.923518 waagent[1822]: 2025-11-01T00:17:33.923396Z INFO ExtHandler ExtHandler Created firewall rules for Azure Fabric: Nov 1 00:17:33.923518 waagent[1822]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 1 00:17:33.923518 waagent[1822]: pkts bytes target prot opt in out source destination Nov 1 00:17:33.923518 waagent[1822]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 1 00:17:33.923518 waagent[1822]: pkts bytes target prot opt in out source destination Nov 1 00:17:33.923518 waagent[1822]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 1 00:17:33.923518 waagent[1822]: pkts bytes target prot opt in out source destination Nov 1 00:17:33.923518 waagent[1822]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 1 00:17:33.923518 waagent[1822]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 1 00:17:33.923518 waagent[1822]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 1 00:17:33.924652 waagent[1822]: 2025-11-01T00:17:33.924590Z INFO ExtHandler ExtHandler Setting up persistent firewall rules Nov 1 00:17:33.927279 waagent[1822]: 2025-11-01T00:17:33.927165Z INFO ExtHandler ExtHandler The firewalld service is not present on the system Nov 1 00:17:33.927736 waagent[1822]: 
2025-11-01T00:17:33.927677Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up /lib/systemd/system/waagent-network-setup.service Nov 1 00:17:33.928092 waagent[1822]: 2025-11-01T00:17:33.928036Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Nov 1 00:17:33.935517 waagent[1822]: 2025-11-01T00:17:33.935461Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Nov 1 00:17:33.936001 waagent[1822]: 2025-11-01T00:17:33.935941Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Nov 1 00:17:33.944050 waagent[1822]: 2025-11-01T00:17:33.943992Z INFO ExtHandler ExtHandler WALinuxAgent-2.15.0.1 running as process 1822 Nov 1 00:17:33.947349 waagent[1822]: 2025-11-01T00:17:33.947277Z INFO ExtHandler ExtHandler [CGI] Cgroups is not currently supported on ['flatcar', '3510.3.8', '', 'Flatcar Container Linux by Kinvolk'] Nov 1 00:17:33.948139 waagent[1822]: 2025-11-01T00:17:33.948079Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case cgroup usage went from enabled to disabled Nov 1 00:17:33.949045 waagent[1822]: 2025-11-01T00:17:33.948990Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Nov 1 00:17:33.951856 waagent[1822]: 2025-11-01T00:17:33.951799Z INFO ExtHandler ExtHandler Signing certificate written to /var/lib/waagent/microsoft_root_certificate.pem Nov 1 00:17:33.952190 waagent[1822]: 2025-11-01T00:17:33.952140Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. 
python supported: [True] Nov 1 00:17:33.953910 waagent[1822]: 2025-11-01T00:17:33.953843Z INFO ExtHandler ExtHandler Starting env monitor service. Nov 1 00:17:33.954611 waagent[1822]: 2025-11-01T00:17:33.954551Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 1 00:17:33.954889 waagent[1822]: 2025-11-01T00:17:33.954840Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 1 00:17:33.955539 waagent[1822]: 2025-11-01T00:17:33.955479Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Nov 1 00:17:33.956154 waagent[1822]: 2025-11-01T00:17:33.956080Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Nov 1 00:17:33.956884 waagent[1822]: 2025-11-01T00:17:33.956815Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 1 00:17:33.957102 waagent[1822]: 2025-11-01T00:17:33.957044Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Nov 1 00:17:33.957102 waagent[1822]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Nov 1 00:17:33.957102 waagent[1822]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Nov 1 00:17:33.957102 waagent[1822]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Nov 1 00:17:33.957102 waagent[1822]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Nov 1 00:17:33.957102 waagent[1822]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 1 00:17:33.957102 waagent[1822]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 1 00:17:33.957704 waagent[1822]: 2025-11-01T00:17:33.957558Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Nov 1 00:17:33.957796 waagent[1822]: 2025-11-01T00:17:33.957730Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Nov 1 00:17:33.958222 waagent[1822]: 2025-11-01T00:17:33.958161Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Nov 1 00:17:33.960725 waagent[1822]: 2025-11-01T00:17:33.960629Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Nov 1 00:17:33.961222 waagent[1822]: 2025-11-01T00:17:33.961149Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Nov 1 00:17:33.961773 waagent[1822]: 2025-11-01T00:17:33.961712Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Nov 1 00:17:33.965156 waagent[1822]: 2025-11-01T00:17:33.965024Z INFO EnvHandler ExtHandler Configure routes
Nov 1 00:17:33.965495 waagent[1822]: 2025-11-01T00:17:33.965438Z INFO EnvHandler ExtHandler Gateway:None
Nov 1 00:17:33.965638 waagent[1822]: 2025-11-01T00:17:33.965595Z INFO EnvHandler ExtHandler Routes:None
Nov 1 00:17:33.985124 waagent[1822]: 2025-11-01T00:17:33.985057Z INFO ExtHandler ExtHandler Downloading agent manifest
Nov 1 00:17:33.987643 waagent[1822]: 2025-11-01T00:17:33.987579Z INFO MonitorHandler ExtHandler Network interfaces:
Nov 1 00:17:33.987643 waagent[1822]: Executing ['ip', '-a', '-o', 'link']:
Nov 1 00:17:33.987643 waagent[1822]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Nov 1 00:17:33.987643 waagent[1822]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:07:53:14 brd ff:ff:ff:ff:ff:ff
Nov 1 00:17:33.987643 waagent[1822]: 3: enP30301s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:07:53:14 brd ff:ff:ff:ff:ff:ff\ altname enP30301p0s2
Nov 1 00:17:33.987643 waagent[1822]: Executing ['ip', '-4', '-a', '-o', 'address']:
Nov 1 00:17:33.987643 waagent[1822]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Nov 1 00:17:33.987643 waagent[1822]: 2: eth0 inet 10.200.20.42/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Nov 1 00:17:33.987643 waagent[1822]: Executing ['ip', '-6', '-a', '-o', 'address']:
Nov 1 00:17:33.987643 waagent[1822]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Nov 1 00:17:33.987643 waagent[1822]: 2: eth0 inet6 fe80::20d:3aff:fe07:5314/64 scope link \ valid_lft forever preferred_lft forever
Nov 1 00:17:34.006031 waagent[1822]: 2025-11-01T00:17:34.005944Z INFO ExtHandler ExtHandler
Nov 1 00:17:34.007128 waagent[1822]: 2025-11-01T00:17:34.007009Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 95d27153-657a-43dc-ad57-0b08f37ab43c correlation 66aa87de-91be-4efa-b53d-5dd60ab4ae01 created: 2025-11-01T00:14:58.771919Z]
Nov 1 00:17:34.014011 waagent[1822]: 2025-11-01T00:17:34.013935Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Nov 1 00:17:34.020815 waagent[1822]: 2025-11-01T00:17:34.020746Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 14 ms]
Nov 1 00:17:34.044964 waagent[1822]: 2025-11-01T00:17:34.044890Z INFO ExtHandler ExtHandler Looking for existing remote access users.
Nov 1 00:17:34.047239 waagent[1822]: 2025-11-01T00:17:34.047170Z INFO EnvHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules
Nov 1 00:17:34.051371 waagent[1822]: 2025-11-01T00:17:34.051188Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.15.0.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: E12268CA-9B2A-45A5-B797-15DFC70B384F;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;]
Nov 1 00:17:34.060914 waagent[1822]: 2025-11-01T00:17:34.060846Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Nov 1 00:17:35.950137 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Nov 1 00:17:35.950337 systemd[1]: Stopped kubelet.service.
Nov 1 00:17:35.951748 systemd[1]: Starting kubelet.service...
Nov 1 00:17:36.093260 systemd[1]: Started kubelet.service.
Nov 1 00:17:36.131737 kubelet[1873]: E1101 00:17:36.131669 1873 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:17:36.133557 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:17:36.133686 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:17:36.408918 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Nov 1 00:17:45.613725 systemd[1]: Created slice system-sshd.slice.
Nov 1 00:17:45.614892 systemd[1]: Started sshd@0-10.200.20.42:22-10.200.16.10:53802.service.
Nov 1 00:17:46.200152 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Nov 1 00:17:46.200341 systemd[1]: Stopped kubelet.service.
Nov 1 00:17:46.201710 systemd[1]: Starting kubelet.service...
Nov 1 00:17:46.349567 systemd[1]: Started kubelet.service.
Nov 1 00:17:46.381442 kubelet[1889]: E1101 00:17:46.381398 1889 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:17:46.383118 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:17:46.383248 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:17:47.237867 sshd[1880]: Accepted publickey for core from 10.200.16.10 port 53802 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI
Nov 1 00:17:47.261265 sshd[1880]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:17:47.265399 systemd[1]: Started session-3.scope.
Nov 1 00:17:47.265855 systemd-logind[1567]: New session 3 of user core.
Nov 1 00:17:47.608895 systemd[1]: Started sshd@1-10.200.20.42:22-10.200.16.10:53810.service.
Nov 1 00:17:48.023974 sshd[1900]: Accepted publickey for core from 10.200.16.10 port 53810 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI
Nov 1 00:17:48.025235 sshd[1900]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:17:48.029050 systemd-logind[1567]: New session 4 of user core.
Nov 1 00:17:48.029446 systemd[1]: Started session-4.scope.
Nov 1 00:17:48.336342 sshd[1900]: pam_unix(sshd:session): session closed for user core
Nov 1 00:17:48.339657 systemd[1]: sshd@1-10.200.20.42:22-10.200.16.10:53810.service: Deactivated successfully.
Nov 1 00:17:48.340748 systemd-logind[1567]: Session 4 logged out. Waiting for processes to exit.
Nov 1 00:17:48.340955 systemd[1]: session-4.scope: Deactivated successfully.
Nov 1 00:17:48.341800 systemd-logind[1567]: Removed session 4.
Nov 1 00:17:48.421929 systemd[1]: Started sshd@2-10.200.20.42:22-10.200.16.10:53824.service.
Nov 1 00:17:48.880030 sshd[1910]: Accepted publickey for core from 10.200.16.10 port 53824 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI
Nov 1 00:17:48.881599 sshd[1910]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:17:48.885625 systemd[1]: Started session-5.scope.
Nov 1 00:17:48.885899 systemd-logind[1567]: New session 5 of user core.
Nov 1 00:17:49.211769 sshd[1910]: pam_unix(sshd:session): session closed for user core
Nov 1 00:17:49.214410 systemd[1]: sshd@2-10.200.20.42:22-10.200.16.10:53824.service: Deactivated successfully.
Nov 1 00:17:49.215052 systemd[1]: session-5.scope: Deactivated successfully.
Nov 1 00:17:49.215573 systemd-logind[1567]: Session 5 logged out. Waiting for processes to exit.
Nov 1 00:17:49.216277 systemd-logind[1567]: Removed session 5.
Nov 1 00:17:49.222568 update_engine[1569]: I1101 00:17:49.222529 1569 update_attempter.cc:509] Updating boot flags...
Nov 1 00:17:49.272152 systemd[1]: Started sshd@3-10.200.20.42:22-10.200.16.10:53830.service.
Nov 1 00:17:49.691067 sshd[1919]: Accepted publickey for core from 10.200.16.10 port 53830 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI
Nov 1 00:17:49.692611 sshd[1919]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:17:49.705402 systemd-logind[1567]: New session 6 of user core.
Nov 1 00:17:49.707758 systemd[1]: Started session-6.scope.
Nov 1 00:17:50.014887 sshd[1919]: pam_unix(sshd:session): session closed for user core
Nov 1 00:17:50.017459 systemd[1]: sshd@3-10.200.20.42:22-10.200.16.10:53830.service: Deactivated successfully.
Nov 1 00:17:50.018108 systemd[1]: session-6.scope: Deactivated successfully.
Nov 1 00:17:50.018460 systemd-logind[1567]: Session 6 logged out. Waiting for processes to exit.
Nov 1 00:17:50.019118 systemd-logind[1567]: Removed session 6.
Nov 1 00:17:50.088222 systemd[1]: Started sshd@4-10.200.20.42:22-10.200.16.10:54364.service.
Nov 1 00:17:50.542945 sshd[1960]: Accepted publickey for core from 10.200.16.10 port 54364 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI
Nov 1 00:17:50.544498 sshd[1960]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:17:50.548498 systemd[1]: Started session-7.scope.
Nov 1 00:17:50.548683 systemd-logind[1567]: New session 7 of user core.
Nov 1 00:17:52.248346 sudo[1964]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 1 00:17:52.248564 sudo[1964]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Nov 1 00:17:53.230478 dbus-daemon[1554]: avc: received setenforce notice (enforcing=1)
Nov 1 00:17:53.232252 sudo[1964]: pam_unix(sudo:session): session closed for user root
Nov 1 00:17:53.334547 sshd[1960]: pam_unix(sshd:session): session closed for user core
Nov 1 00:17:53.337177 systemd[1]: sshd@4-10.200.20.42:22-10.200.16.10:54364.service: Deactivated successfully.
Nov 1 00:17:53.338125 systemd[1]: session-7.scope: Deactivated successfully.
Nov 1 00:17:53.338494 systemd-logind[1567]: Session 7 logged out. Waiting for processes to exit.
Nov 1 00:17:53.339304 systemd-logind[1567]: Removed session 7.
Nov 1 00:17:53.412621 systemd[1]: Started sshd@5-10.200.20.42:22-10.200.16.10:54374.service.
Nov 1 00:17:53.871521 sshd[1968]: Accepted publickey for core from 10.200.16.10 port 54374 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI
Nov 1 00:17:53.873189 sshd[1968]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:17:53.877366 systemd[1]: Started session-8.scope.
Nov 1 00:17:53.878250 systemd-logind[1567]: New session 8 of user core.
Nov 1 00:17:54.128710 sudo[1973]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 1 00:17:54.129560 sudo[1973]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Nov 1 00:17:54.131951 sudo[1973]: pam_unix(sudo:session): session closed for user root
Nov 1 00:17:54.135841 sudo[1972]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Nov 1 00:17:54.136043 sudo[1972]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Nov 1 00:17:54.144015 systemd[1]: Stopping audit-rules.service...
Nov 1 00:17:54.144000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Nov 1 00:17:54.149097 kernel: kauditd_printk_skb: 34 callbacks suppressed
Nov 1 00:17:54.149175 kernel: audit: type=1305 audit(1761956274.144:165): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Nov 1 00:17:54.149393 auditctl[1976]: No rules
Nov 1 00:17:54.149838 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 1 00:17:54.150063 systemd[1]: Stopped audit-rules.service.
Nov 1 00:17:54.152073 systemd[1]: Starting audit-rules.service...
Nov 1 00:17:54.144000 audit[1976]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff2b326a0 a2=420 a3=0 items=0 ppid=1 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:17:54.184610 kernel: audit: type=1300 audit(1761956274.144:165): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff2b326a0 a2=420 a3=0 items=0 ppid=1 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:17:54.144000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
Nov 1 00:17:54.191637 kernel: audit: type=1327 audit(1761956274.144:165): proctitle=2F7362696E2F617564697463746C002D44
Nov 1 00:17:54.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:17:54.208099 kernel: audit: type=1131 audit(1761956274.149:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:17:54.208166 augenrules[1994]: No rules
Nov 1 00:17:54.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:17:54.208756 systemd[1]: Finished audit-rules.service.
Nov 1 00:17:54.210363 sudo[1972]: pam_unix(sudo:session): session closed for user root
Nov 1 00:17:54.225514 kernel: audit: type=1130 audit(1761956274.208:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:17:54.210000 audit[1972]: USER_END pid=1972 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Nov 1 00:17:54.244290 kernel: audit: type=1106 audit(1761956274.210:168): pid=1972 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Nov 1 00:17:54.210000 audit[1972]: CRED_DISP pid=1972 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Nov 1 00:17:54.261041 kernel: audit: type=1104 audit(1761956274.210:169): pid=1972 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Nov 1 00:17:54.304249 sshd[1968]: pam_unix(sshd:session): session closed for user core
Nov 1 00:17:54.304000 audit[1968]: USER_END pid=1968 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Nov 1 00:17:54.307170 systemd-logind[1567]: Session 8 logged out. Waiting for processes to exit.
Nov 1 00:17:54.308052 systemd[1]: sshd@5-10.200.20.42:22-10.200.16.10:54374.service: Deactivated successfully.
Nov 1 00:17:54.308875 systemd[1]: session-8.scope: Deactivated successfully.
Nov 1 00:17:54.310364 systemd-logind[1567]: Removed session 8.
Nov 1 00:17:54.304000 audit[1968]: CRED_DISP pid=1968 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Nov 1 00:17:54.347140 kernel: audit: type=1106 audit(1761956274.304:170): pid=1968 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Nov 1 00:17:54.347258 kernel: audit: type=1104 audit(1761956274.304:171): pid=1968 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Nov 1 00:17:54.347296 kernel: audit: type=1131 audit(1761956274.304:172): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.42:22-10.200.16.10:54374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:17:54.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.42:22-10.200.16.10:54374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:17:54.373672 systemd[1]: Started sshd@6-10.200.20.42:22-10.200.16.10:54380.service.
Nov 1 00:17:54.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.42:22-10.200.16.10:54380 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:17:54.791000 audit[2001]: USER_ACCT pid=2001 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Nov 1 00:17:54.791968 sshd[2001]: Accepted publickey for core from 10.200.16.10 port 54380 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI
Nov 1 00:17:54.792000 audit[2001]: CRED_ACQ pid=2001 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Nov 1 00:17:54.792000 audit[2001]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe6a68300 a2=3 a3=1 items=0 ppid=1 pid=2001 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:17:54.792000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Nov 1 00:17:54.793543 sshd[2001]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:17:54.797631 systemd[1]: Started session-9.scope.
Nov 1 00:17:54.798335 systemd-logind[1567]: New session 9 of user core.
Nov 1 00:17:54.803000 audit[2001]: USER_START pid=2001 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Nov 1 00:17:54.804000 audit[2004]: CRED_ACQ pid=2004 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success'
Nov 1 00:17:55.029000 audit[2005]: USER_ACCT pid=2005 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Nov 1 00:17:55.030006 sudo[2005]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 1 00:17:55.029000 audit[2005]: CRED_REFR pid=2005 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Nov 1 00:17:55.030225 sudo[2005]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Nov 1 00:17:55.031000 audit[2005]: USER_START pid=2005 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Nov 1 00:17:55.067872 systemd[1]: Starting docker.service...
Nov 1 00:17:55.621897 env[2015]: time="2025-11-01T00:17:55.621854248Z" level=info msg="Starting up"
Nov 1 00:17:55.623504 env[2015]: time="2025-11-01T00:17:55.623482124Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 1 00:17:55.623603 env[2015]: time="2025-11-01T00:17:55.623589283Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 1 00:17:55.623678 env[2015]: time="2025-11-01T00:17:55.623663683Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Nov 1 00:17:55.623731 env[2015]: time="2025-11-01T00:17:55.623719643Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 1 00:17:55.625394 env[2015]: time="2025-11-01T00:17:55.625374039Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 1 00:17:55.625481 env[2015]: time="2025-11-01T00:17:55.625468359Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 1 00:17:55.625545 env[2015]: time="2025-11-01T00:17:55.625530119Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Nov 1 00:17:55.625600 env[2015]: time="2025-11-01T00:17:55.625587359Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 1 00:17:55.630528 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport440886451-merged.mount: Deactivated successfully.
Nov 1 00:17:55.801272 env[2015]: time="2025-11-01T00:17:55.801239417Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Nov 1 00:17:55.801474 env[2015]: time="2025-11-01T00:17:55.801460256Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Nov 1 00:17:55.801666 env[2015]: time="2025-11-01T00:17:55.801652056Z" level=info msg="Loading containers: start."
Nov 1 00:17:56.450203 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Nov 1 00:17:56.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:17:56.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:17:56.450395 systemd[1]: Stopped kubelet.service.
Nov 1 00:17:56.451828 systemd[1]: Starting kubelet.service...
Nov 1 00:17:56.756000 audit[2046]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=2046 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:17:56.756000 audit[2046]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffeb287f20 a2=0 a3=1 items=0 ppid=2015 pid=2046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:17:56.756000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Nov 1 00:17:56.758000 audit[2048]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=2048 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:17:56.758000 audit[2048]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffea24df40 a2=0 a3=1 items=0 ppid=2015 pid=2048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:17:56.758000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Nov 1 00:17:56.760000 audit[2050]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2050 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:17:56.760000 audit[2050]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff9551d30 a2=0 a3=1 items=0 ppid=2015 pid=2050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:17:56.760000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Nov 1 00:17:56.762000 audit[2052]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2052 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:17:56.762000 audit[2052]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff9452c50 a2=0 a3=1 items=0 ppid=2015 pid=2052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:17:56.762000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Nov 1 00:17:56.763000 audit[2054]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2054 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:17:56.763000 audit[2054]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd6b43f80 a2=0 a3=1 items=0 ppid=2015 pid=2054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:17:56.763000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E
Nov 1 00:17:56.765000 audit[2056]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_rule pid=2056 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:17:56.765000 audit[2056]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffffea7daf0 a2=0 a3=1 items=0 ppid=2015 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:17:56.765000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E
Nov 1 00:17:56.851000 audit[2058]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=2058 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:17:56.851000 audit[2058]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff8c05430 a2=0 a3=1 items=0 ppid=2015 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:17:56.851000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Nov 1 00:17:56.853000 audit[2060]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2060 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:17:56.853000 audit[2060]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffdabcb5f0 a2=0 a3=1 items=0 ppid=2015 pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:17:56.853000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Nov 1 00:17:56.854000 audit[2062]: NETFILTER_CFG table=filter:13 family=2 entries=2 op=nft_register_chain pid=2062 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:17:56.854000 audit[2062]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=fffff8798770 a2=0 a3=1 items=0 ppid=2015 pid=2062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:17:56.854000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Nov 1 00:17:56.894000 audit[2066]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_unregister_rule pid=2066 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:17:56.894000 audit[2066]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffd3642390 a2=0 a3=1 items=0 ppid=2015 pid=2066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:17:56.894000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Nov 1 00:17:56.898000 audit[2067]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2067 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:17:56.898000 audit[2067]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffcffc5200 a2=0 a3=1 items=0 ppid=2015 pid=2067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:17:56.898000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Nov 1 00:17:57.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:17:57.048957 systemd[1]: Started kubelet.service.
Nov 1 00:17:57.077319 kernel: Initializing XFRM netlink socket
Nov 1 00:17:57.092194 kubelet[2074]: E1101 00:17:57.092141 2074 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:17:57.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Nov 1 00:17:57.093838 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:17:57.093986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:17:57.968173 env[2015]: time="2025-11-01T00:17:57.968127127Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Nov 1 00:17:58.111000 audit[2087]: NETFILTER_CFG table=nat:16 family=2 entries=2 op=nft_register_chain pid=2087 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:17:58.111000 audit[2087]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=ffffeed06360 a2=0 a3=1 items=0 ppid=2015 pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:17:58.111000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
Nov 1 00:17:58.165000 audit[2090]: NETFILTER_CFG table=nat:17 family=2 entries=1 op=nft_register_rule pid=2090 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:17:58.165000 audit[2090]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=fffff9371190 a2=0 a3=1 items=0 ppid=2015 pid=2090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:17:58.165000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
Nov 1 00:17:58.168000 audit[2093]: NETFILTER_CFG table=filter:18 family=2 entries=1 op=nft_register_rule pid=2093 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:17:58.168000 audit[2093]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffc01184d0 a2=0 a3=1 items=0 ppid=2015 pid=2093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:17:58.168000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054
Nov 1 00:17:58.169000 audit[2095]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2095 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:17:58.169000 audit[2095]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffd40159d0 a2=0 a3=1 items=0 ppid=2015 pid=2095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:17:58.169000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054
Nov 1 00:17:58.171000 audit[2097]: NETFILTER_CFG table=nat:20 family=2 entries=2 op=nft_register_chain pid=2097 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:17:58.171000 audit[2097]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=ffffc82b3fb0 a2=0 a3=1 items=0 ppid=2015 pid=2097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:17:58.171000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Nov 1 00:17:58.173000 audit[2099]: NETFILTER_CFG table=nat:21 family=2 entries=2 op=nft_register_chain pid=2099 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Nov 1 00:17:58.173000 audit[2099]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=ffffe17e3670 a2=0 a3=1 items=0 ppid=2015 pid=2099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none)
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:17:58.173000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Nov 1 00:17:58.175000 audit[2101]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2101 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:17:58.175000 audit[2101]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=fffffe0d0300 a2=0 a3=1 items=0 ppid=2015 pid=2101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:17:58.175000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Nov 1 00:17:58.176000 audit[2103]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2103 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:17:58.176000 audit[2103]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=ffffe068eb70 a2=0 a3=1 items=0 ppid=2015 pid=2103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:17:58.176000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Nov 1 00:17:58.178000 audit[2105]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=2105 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:17:58.178000 audit[2105]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=fffff68ca000 a2=0 a3=1 items=0 ppid=2015 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:17:58.178000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Nov 1 00:17:58.180000 audit[2107]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2107 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:17:58.180000 audit[2107]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=fffffec2d540 a2=0 a3=1 items=0 ppid=2015 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:17:58.180000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Nov 1 00:17:58.182000 audit[2109]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=2109 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:17:58.182000 audit[2109]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffede64fb0 a2=0 a3=1 items=0 ppid=2015 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:17:58.182000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Nov 1 
00:17:58.182910 systemd-networkd[1788]: docker0: Link UP Nov 1 00:17:58.201000 audit[2113]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_unregister_rule pid=2113 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:17:58.201000 audit[2113]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffea721040 a2=0 a3=1 items=0 ppid=2015 pid=2113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:17:58.201000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Nov 1 00:17:58.216000 audit[2114]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_rule pid=2114 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:17:58.216000 audit[2114]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffe54d7b30 a2=0 a3=1 items=0 ppid=2015 pid=2114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:17:58.216000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Nov 1 00:17:58.217613 env[2015]: time="2025-11-01T00:17:58.217588949Z" level=info msg="Loading containers: done." 
Nov 1 00:17:58.305155 env[2015]: time="2025-11-01T00:17:58.304328514Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 00:17:58.305502 env[2015]: time="2025-11-01T00:17:58.305483751Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Nov 1 00:17:58.305667 env[2015]: time="2025-11-01T00:17:58.305653871Z" level=info msg="Daemon has completed initialization" Nov 1 00:17:58.342492 systemd[1]: Started docker.service. Nov 1 00:17:58.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:17:58.344684 env[2015]: time="2025-11-01T00:17:58.344639816Z" level=info msg="API listen on /run/docker.sock" Nov 1 00:18:02.693059 env[1586]: time="2025-11-01T00:18:02.693006760Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 1 00:18:02.828837 env[1586]: time="2025-11-01T00:18:02.828728517Z" level=info msg="trying next host" error="failed to do request: Head \"https://us-west1-docker.pkg.dev/v2/k8s-artifacts-prod/images/kube-apiserver/manifests/v1.32.9\": dial tcp: lookup us-west1-docker.pkg.dev: no such host" host=registry.k8s.io Nov 1 00:18:02.839947 env[1586]: time="2025-11-01T00:18:02.839896674Z" level=error msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" failed" error="failed to pull and unpack image \"registry.k8s.io/kube-apiserver:v1.32.9\": failed to resolve reference \"registry.k8s.io/kube-apiserver:v1.32.9\": failed to do request: Head \"https://us-west1-docker.pkg.dev/v2/k8s-artifacts-prod/images/kube-apiserver/manifests/v1.32.9\": dial tcp: lookup us-west1-docker.pkg.dev: no such host" Nov 1 00:18:02.840473 env[1586]: time="2025-11-01T00:18:02.840449832Z" level=info msg="PullImage 
\"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 1 00:18:03.567737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3936658678.mount: Deactivated successfully. Nov 1 00:18:05.160969 env[1586]: time="2025-11-01T00:18:05.160922896Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:05.165672 env[1586]: time="2025-11-01T00:18:05.165638919Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:05.194042 env[1586]: time="2025-11-01T00:18:05.194001258Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:05.198999 env[1586]: time="2025-11-01T00:18:05.198948841Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:05.200561 env[1586]: time="2025-11-01T00:18:05.200524675Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\"" Nov 1 00:18:05.202497 env[1586]: time="2025-11-01T00:18:05.202459508Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 1 00:18:06.727810 env[1586]: time="2025-11-01T00:18:06.727757889Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:06.733506 env[1586]: time="2025-11-01T00:18:06.733469789Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:06.737514 env[1586]: time="2025-11-01T00:18:06.737487456Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:06.740776 env[1586]: time="2025-11-01T00:18:06.740750284Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:06.741470 env[1586]: time="2025-11-01T00:18:06.741443442Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\"" Nov 1 00:18:06.741959 env[1586]: time="2025-11-01T00:18:06.741934360Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 1 00:18:07.200174 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Nov 1 00:18:07.200358 systemd[1]: Stopped kubelet.service. Nov 1 00:18:07.222878 kernel: kauditd_printk_skb: 88 callbacks suppressed Nov 1 00:18:07.222968 kernel: audit: type=1130 audit(1761956287.200:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:18:07.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:18:07.201863 systemd[1]: Starting kubelet.service... 
Nov 1 00:18:07.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:18:07.240992 kernel: audit: type=1131 audit(1761956287.200:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:18:07.300098 systemd[1]: Started kubelet.service. Nov 1 00:18:07.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:18:07.322314 kernel: audit: type=1130 audit(1761956287.300:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:18:07.406724 kubelet[2153]: E1101 00:18:07.406689 2153 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:18:07.409057 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:18:07.409213 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:18:07.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Nov 1 00:18:07.429301 kernel: audit: type=1131 audit(1761956287.409:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Nov 1 00:18:08.318644 env[1586]: time="2025-11-01T00:18:08.318602353Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:08.345065 env[1586]: time="2025-11-01T00:18:08.345016907Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:08.348828 env[1586]: time="2025-11-01T00:18:08.348787015Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:08.353670 env[1586]: time="2025-11-01T00:18:08.353632879Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:08.354560 env[1586]: time="2025-11-01T00:18:08.354529676Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Nov 1 00:18:08.355248 env[1586]: time="2025-11-01T00:18:08.355213714Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 1 00:18:09.568609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1502894394.mount: Deactivated successfully. 
Nov 1 00:18:10.058885 env[1586]: time="2025-11-01T00:18:10.058829809Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:10.065893 env[1586]: time="2025-11-01T00:18:10.065819347Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:10.069932 env[1586]: time="2025-11-01T00:18:10.069903375Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:10.074534 env[1586]: time="2025-11-01T00:18:10.074501560Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:10.074999 env[1586]: time="2025-11-01T00:18:10.074966999Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Nov 1 00:18:10.076463 env[1586]: time="2025-11-01T00:18:10.076125275Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 1 00:18:11.091881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount41685110.mount: Deactivated successfully. 
Nov 1 00:18:12.189668 env[1586]: time="2025-11-01T00:18:12.189610331Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:12.196382 env[1586]: time="2025-11-01T00:18:12.196343632Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:12.200666 env[1586]: time="2025-11-01T00:18:12.200624859Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:12.203919 env[1586]: time="2025-11-01T00:18:12.203893129Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:12.204869 env[1586]: time="2025-11-01T00:18:12.204842647Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Nov 1 00:18:12.205506 env[1586]: time="2025-11-01T00:18:12.205483365Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 1 00:18:12.786864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount798196954.mount: Deactivated successfully. 
Nov 1 00:18:12.808161 env[1586]: time="2025-11-01T00:18:12.808111998Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:12.813188 env[1586]: time="2025-11-01T00:18:12.813163864Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:12.817481 env[1586]: time="2025-11-01T00:18:12.817445331Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:12.821472 env[1586]: time="2025-11-01T00:18:12.821445879Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:12.822104 env[1586]: time="2025-11-01T00:18:12.822081197Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Nov 1 00:18:12.823245 env[1586]: time="2025-11-01T00:18:12.823224754Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 1 00:18:13.507096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1346290754.mount: Deactivated successfully. 
Nov 1 00:18:17.057966 env[1586]: time="2025-11-01T00:18:17.057728829Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:17.063080 env[1586]: time="2025-11-01T00:18:17.063042295Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:17.067816 env[1586]: time="2025-11-01T00:18:17.067770723Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:17.073144 env[1586]: time="2025-11-01T00:18:17.073114350Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:17.074133 env[1586]: time="2025-11-01T00:18:17.074106507Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Nov 1 00:18:17.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:18:17.450135 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Nov 1 00:18:17.450325 systemd[1]: Stopped kubelet.service. Nov 1 00:18:17.451900 systemd[1]: Starting kubelet.service... Nov 1 00:18:17.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:18:17.491666 kernel: audit: type=1130 audit(1761956297.450:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:18:17.491850 kernel: audit: type=1131 audit(1761956297.450:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:18:17.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:18:17.550865 systemd[1]: Started kubelet.service. Nov 1 00:18:17.573321 kernel: audit: type=1130 audit(1761956297.550:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:18:17.607045 kubelet[2172]: E1101 00:18:17.607006 2172 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:18:17.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Nov 1 00:18:17.608660 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:18:17.608807 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 1 00:18:17.628326 kernel: audit: type=1131 audit(1761956297.608:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Nov 1 00:18:22.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:18:22.669686 systemd[1]: Stopped kubelet.service. Nov 1 00:18:22.671819 systemd[1]: Starting kubelet.service... Nov 1 00:18:22.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:18:22.709392 kernel: audit: type=1130 audit(1761956302.669:219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:18:22.713523 kernel: audit: type=1131 audit(1761956302.669:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:18:22.732400 systemd[1]: Reloading. 
Nov 1 00:18:22.813341 /usr/lib/systemd/system-generators/torcx-generator[2222]: time="2025-11-01T00:18:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:18:22.813370 /usr/lib/systemd/system-generators/torcx-generator[2222]: time="2025-11-01T00:18:22Z" level=info msg="torcx already run" Nov 1 00:18:22.888996 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:18:22.889164 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:18:22.907647 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:18:23.006979 systemd[1]: Started kubelet.service. Nov 1 00:18:23.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:18:23.027068 systemd[1]: Stopping kubelet.service... Nov 1 00:18:23.027428 kernel: audit: type=1130 audit(1761956303.006:221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:18:23.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:18:23.028959 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:18:23.029196 systemd[1]: Stopped kubelet.service. Nov 1 00:18:23.031226 systemd[1]: Starting kubelet.service... Nov 1 00:18:23.053344 kernel: audit: type=1131 audit(1761956303.028:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:18:23.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:18:23.215107 systemd[1]: Started kubelet.service. Nov 1 00:18:23.235324 kernel: audit: type=1130 audit(1761956303.215:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:18:23.263344 kubelet[2305]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:18:23.263690 kubelet[2305]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:18:23.263739 kubelet[2305]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 1 00:18:23.263876 kubelet[2305]: I1101 00:18:23.263849 2305 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:18:23.643491 kubelet[2305]: I1101 00:18:23.643396 2305 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:18:23.643616 kubelet[2305]: I1101 00:18:23.643604 2305 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:18:23.643933 kubelet[2305]: I1101 00:18:23.643916 2305 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:18:23.668931 kubelet[2305]: E1101 00:18:23.668896 2305 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.20.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:18:23.670042 kubelet[2305]: I1101 00:18:23.670018 2305 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:18:23.676663 kubelet[2305]: E1101 00:18:23.676582 2305 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:18:23.676663 kubelet[2305]: I1101 00:18:23.676663 2305 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:18:23.679766 kubelet[2305]: I1101 00:18:23.679745 2305 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 00:18:23.680809 kubelet[2305]: I1101 00:18:23.680773 2305 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:18:23.680964 kubelet[2305]: I1101 00:18:23.680810 2305 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-c51a7922c9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 1 00:18:23.681050 kubelet[2305]: I1101 00:18:23.680972 2305 topology_manager.go:138] "Creating topology manager with 
none policy" Nov 1 00:18:23.681050 kubelet[2305]: I1101 00:18:23.680981 2305 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:18:23.681111 kubelet[2305]: I1101 00:18:23.681096 2305 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:18:23.684379 kubelet[2305]: I1101 00:18:23.684359 2305 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:18:23.684433 kubelet[2305]: I1101 00:18:23.684381 2305 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:18:23.684433 kubelet[2305]: I1101 00:18:23.684399 2305 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:18:23.684433 kubelet[2305]: I1101 00:18:23.684409 2305 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:18:23.687327 kubelet[2305]: W1101 00:18:23.687262 2305 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-c51a7922c9&limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Nov 1 00:18:23.687375 kubelet[2305]: E1101 00:18:23.687342 2305 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-c51a7922c9&limit=500&resourceVersion=0\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:18:23.694432 kubelet[2305]: W1101 00:18:23.694400 2305 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Nov 1 00:18:23.694562 kubelet[2305]: E1101 00:18:23.694545 2305 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:18:23.694727 kubelet[2305]: I1101 00:18:23.694711 2305 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:18:23.695269 kubelet[2305]: I1101 00:18:23.695253 2305 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:18:23.695430 kubelet[2305]: W1101 00:18:23.695418 2305 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 1 00:18:23.696248 kubelet[2305]: I1101 00:18:23.696223 2305 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:18:23.696363 kubelet[2305]: I1101 00:18:23.696257 2305 server.go:1287] "Started kubelet" Nov 1 00:18:23.700597 kubelet[2305]: I1101 00:18:23.700566 2305 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:18:23.701576 kubelet[2305]: I1101 00:18:23.701559 2305 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:18:23.708321 kubelet[2305]: I1101 00:18:23.708246 2305 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:18:23.708555 kubelet[2305]: I1101 00:18:23.708532 2305 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:18:23.709000 audit[2305]: AVC avc: denied { mac_admin } for pid=2305 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:18:23.717836 kubelet[2305]: I1101 00:18:23.715367 2305 kubelet.go:1507] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration 
dir" path="/var/lib/kubelet/plugins_registry" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins_registry: invalid argument" Nov 1 00:18:23.717836 kubelet[2305]: I1101 00:18:23.715423 2305 kubelet.go:1511] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Nov 1 00:18:23.717836 kubelet[2305]: I1101 00:18:23.715508 2305 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:18:23.709000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:18:23.740559 kernel: audit: type=1400 audit(1761956303.709:224): avc: denied { mac_admin } for pid=2305 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:18:23.740643 kernel: audit: type=1401 audit(1761956303.709:224): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:18:23.740690 kubelet[2305]: I1101 00:18:23.740654 2305 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:18:23.742435 kubelet[2305]: I1101 00:18:23.742406 2305 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:18:23.742700 kubelet[2305]: E1101 00:18:23.742670 2305 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-c51a7922c9\" not found" Nov 1 00:18:23.709000 audit[2305]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000c01110 a1=4000c5e420 a2=4000c010e0 a3=25 items=0 ppid=1 pid=2305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:23.763202 
kubelet[2305]: E1101 00:18:23.763054 2305 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.42:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.42:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.8-n-c51a7922c9.1873b9ec47931db5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.8-n-c51a7922c9,UID:ci-3510.3.8-n-c51a7922c9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.8-n-c51a7922c9,},FirstTimestamp:2025-11-01 00:18:23.696240053 +0000 UTC m=+0.475082394,LastTimestamp:2025-11-01 00:18:23.696240053 +0000 UTC m=+0.475082394,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.8-n-c51a7922c9,}" Nov 1 00:18:23.764640 kubelet[2305]: I1101 00:18:23.764622 2305 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:18:23.764842 kubelet[2305]: I1101 00:18:23.764825 2305 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:18:23.766521 kubelet[2305]: I1101 00:18:23.766500 2305 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:18:23.771531 kernel: audit: type=1300 audit(1761956303.709:224): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000c01110 a1=4000c5e420 a2=4000c010e0 a3=25 items=0 ppid=1 pid=2305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:23.771629 kernel: audit: type=1327 audit(1761956303.709:224): 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:18:23.709000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:18:23.772471 kubelet[2305]: I1101 00:18:23.772454 2305 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:18:23.772613 kubelet[2305]: I1101 00:18:23.772602 2305 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:18:23.773120 kubelet[2305]: E1101 00:18:23.773084 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-c51a7922c9?timeout=10s\": dial tcp 10.200.20.42:6443: connect: connection refused" interval="200ms" Nov 1 00:18:23.773322 kubelet[2305]: W1101 00:18:23.773274 2305 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Nov 1 00:18:23.773427 kubelet[2305]: E1101 00:18:23.773410 2305 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:18:23.780787 kubelet[2305]: E1101 00:18:23.780768 2305 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:18:23.715000 audit[2305]: AVC avc: denied { mac_admin } for pid=2305 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:18:23.818194 kernel: audit: type=1400 audit(1761956303.715:225): avc: denied { mac_admin } for pid=2305 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:18:23.715000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:18:23.715000 audit[2305]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40009c9320 a1=4000c5e438 a2=4000c011a0 a3=25 items=0 ppid=1 pid=2305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:23.715000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:18:23.742000 audit[2317]: NETFILTER_CFG table=mangle:29 family=2 entries=2 op=nft_register_chain pid=2317 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:18:23.742000 audit[2317]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffeefa0b80 a2=0 a3=1 items=0 ppid=2305 pid=2317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:23.742000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Nov 1 
00:18:23.742000 audit[2318]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_chain pid=2318 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:18:23.742000 audit[2318]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc35186c0 a2=0 a3=1 items=0 ppid=2305 pid=2318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:23.742000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Nov 1 00:18:23.742000 audit[2320]: NETFILTER_CFG table=filter:31 family=2 entries=2 op=nft_register_chain pid=2320 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:18:23.742000 audit[2320]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe91099f0 a2=0 a3=1 items=0 ppid=2305 pid=2320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:23.742000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 00:18:23.742000 audit[2322]: NETFILTER_CFG table=filter:32 family=2 entries=2 op=nft_register_chain pid=2322 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:18:23.742000 audit[2322]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe1c7a090 a2=0 a3=1 items=0 ppid=2305 pid=2322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:23.742000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 00:18:23.843450 kubelet[2305]: E1101 00:18:23.843425 2305 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-c51a7922c9\" not found" Nov 1 00:18:23.867000 audit[2329]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2329 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:18:23.867000 audit[2329]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=fffff6dea6b0 a2=0 a3=1 items=0 ppid=2305 pid=2329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:23.867000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Nov 1 00:18:23.868320 kubelet[2305]: I1101 00:18:23.868279 2305 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Nov 1 00:18:23.868000 audit[2330]: NETFILTER_CFG table=mangle:34 family=10 entries=2 op=nft_register_chain pid=2330 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:18:23.868000 audit[2330]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffdcfbf970 a2=0 a3=1 items=0 ppid=2305 pid=2330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:23.868000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Nov 1 00:18:23.869384 kubelet[2305]: I1101 00:18:23.869369 2305 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 00:18:23.869460 kubelet[2305]: I1101 00:18:23.869451 2305 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:18:23.869539 kubelet[2305]: I1101 00:18:23.869528 2305 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 1 00:18:23.869600 kubelet[2305]: I1101 00:18:23.869589 2305 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:18:23.869698 kubelet[2305]: E1101 00:18:23.869683 2305 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:18:23.870691 kubelet[2305]: W1101 00:18:23.870671 2305 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Nov 1 00:18:23.870000 audit[2332]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=2332 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:18:23.870000 audit[2332]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffda0c7670 a2=0 a3=1 items=0 ppid=2305 pid=2332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:23.870000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Nov 1 00:18:23.871269 kubelet[2305]: E1101 00:18:23.871251 2305 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:18:23.871000 audit[2331]: NETFILTER_CFG table=mangle:36 family=2 entries=1 op=nft_register_chain pid=2331 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:18:23.871000 audit[2331]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 
a1=fffffc67acd0 a2=0 a3=1 items=0 ppid=2305 pid=2331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:23.871000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Nov 1 00:18:23.871000 audit[2333]: NETFILTER_CFG table=nat:37 family=10 entries=2 op=nft_register_chain pid=2333 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:18:23.871000 audit[2333]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=ffffd5f19c90 a2=0 a3=1 items=0 ppid=2305 pid=2333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:23.871000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Nov 1 00:18:23.872000 audit[2335]: NETFILTER_CFG table=filter:38 family=10 entries=2 op=nft_register_chain pid=2335 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:18:23.872000 audit[2335]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffffe6cad70 a2=0 a3=1 items=0 ppid=2305 pid=2335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:23.872000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Nov 1 00:18:23.873000 audit[2336]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_chain pid=2336 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:18:23.873000 audit[2336]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=100 a0=3 a1=fffffd857cf0 a2=0 a3=1 items=0 ppid=2305 pid=2336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:23.873000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Nov 1 00:18:23.874000 audit[2337]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_chain pid=2337 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:18:23.874000 audit[2337]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe470d890 a2=0 a3=1 items=0 ppid=2305 pid=2337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:23.874000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Nov 1 00:18:23.902765 kubelet[2305]: I1101 00:18:23.900501 2305 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:18:23.902765 kubelet[2305]: I1101 00:18:23.900523 2305 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:18:23.902765 kubelet[2305]: I1101 00:18:23.900550 2305 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:18:23.907187 kubelet[2305]: I1101 00:18:23.907159 2305 policy_none.go:49] "None policy: Start" Nov 1 00:18:23.907243 kubelet[2305]: I1101 00:18:23.907196 2305 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:18:23.907243 kubelet[2305]: I1101 00:18:23.907212 2305 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:18:23.913956 kubelet[2305]: I1101 00:18:23.913935 2305 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 
00:18:23.913000 audit[2305]: AVC avc: denied { mac_admin } for pid=2305 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:18:23.913000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:18:23.913000 audit[2305]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000f2f680 a1=4000f13f68 a2=4000f2f650 a3=25 items=0 ppid=1 pid=2305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:23.913000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:18:23.914304 kubelet[2305]: I1101 00:18:23.914269 2305 server.go:94] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" Nov 1 00:18:23.914467 kubelet[2305]: I1101 00:18:23.914454 2305 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:18:23.914557 kubelet[2305]: I1101 00:18:23.914528 2305 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:18:23.915948 kubelet[2305]: I1101 00:18:23.915931 2305 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:18:23.916662 kubelet[2305]: E1101 00:18:23.916636 2305 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 00:18:23.916742 kubelet[2305]: E1101 00:18:23.916673 2305 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.8-n-c51a7922c9\" not found" Nov 1 00:18:23.974044 kubelet[2305]: E1101 00:18:23.974008 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-c51a7922c9?timeout=10s\": dial tcp 10.200.20.42:6443: connect: connection refused" interval="400ms" Nov 1 00:18:23.977739 kubelet[2305]: E1101 00:18:23.977703 2305 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-c51a7922c9\" not found" node="ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:23.979382 kubelet[2305]: E1101 00:18:23.979365 2305 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-c51a7922c9\" not found" node="ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:23.979555 kubelet[2305]: E1101 00:18:23.979529 2305 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-c51a7922c9\" not found" node="ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:24.016177 kubelet[2305]: I1101 00:18:24.016155 2305 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:24.016645 kubelet[2305]: E1101 00:18:24.016624 2305 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.42:6443/api/v1/nodes\": dial tcp 10.200.20.42:6443: connect: connection refused" node="ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:24.073998 kubelet[2305]: I1101 00:18:24.073970 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/4b54b738976d9f2d74a49404f333944f-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-c51a7922c9\" (UID: \"4b54b738976d9f2d74a49404f333944f\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:24.074156 kubelet[2305]: I1101 00:18:24.074139 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b54b738976d9f2d74a49404f333944f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-c51a7922c9\" (UID: \"4b54b738976d9f2d74a49404f333944f\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:24.074231 kubelet[2305]: I1101 00:18:24.074219 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/16ae8945348385f766cc326a3109f53e-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-c51a7922c9\" (UID: \"16ae8945348385f766cc326a3109f53e\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:24.074330 kubelet[2305]: I1101 00:18:24.074314 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8f4542e32ee8ef8a93b262d1797c0d7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-c51a7922c9\" (UID: \"a8f4542e32ee8ef8a93b262d1797c0d7\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:24.074417 kubelet[2305]: I1101 00:18:24.074404 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8f4542e32ee8ef8a93b262d1797c0d7-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-c51a7922c9\" (UID: \"a8f4542e32ee8ef8a93b262d1797c0d7\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:24.074500 
kubelet[2305]: I1101 00:18:24.074487 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4b54b738976d9f2d74a49404f333944f-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-c51a7922c9\" (UID: \"4b54b738976d9f2d74a49404f333944f\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:24.074579 kubelet[2305]: I1101 00:18:24.074567 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b54b738976d9f2d74a49404f333944f-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-c51a7922c9\" (UID: \"4b54b738976d9f2d74a49404f333944f\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:24.074647 kubelet[2305]: I1101 00:18:24.074635 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b54b738976d9f2d74a49404f333944f-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-c51a7922c9\" (UID: \"4b54b738976d9f2d74a49404f333944f\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:24.074706 kubelet[2305]: I1101 00:18:24.074695 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a8f4542e32ee8ef8a93b262d1797c0d7-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-c51a7922c9\" (UID: \"a8f4542e32ee8ef8a93b262d1797c0d7\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:24.218481 kubelet[2305]: I1101 00:18:24.218456 2305 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:24.219020 kubelet[2305]: E1101 00:18:24.218997 2305 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://10.200.20.42:6443/api/v1/nodes\": dial tcp 10.200.20.42:6443: connect: connection refused" node="ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:24.279136 env[1586]: time="2025-11-01T00:18:24.279093943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-c51a7922c9,Uid:4b54b738976d9f2d74a49404f333944f,Namespace:kube-system,Attempt:0,}" Nov 1 00:18:24.280494 env[1586]: time="2025-11-01T00:18:24.280467100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-c51a7922c9,Uid:a8f4542e32ee8ef8a93b262d1797c0d7,Namespace:kube-system,Attempt:0,}" Nov 1 00:18:24.280857 env[1586]: time="2025-11-01T00:18:24.280826979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-c51a7922c9,Uid:16ae8945348385f766cc326a3109f53e,Namespace:kube-system,Attempt:0,}" Nov 1 00:18:24.374562 kubelet[2305]: E1101 00:18:24.374525 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-c51a7922c9?timeout=10s\": dial tcp 10.200.20.42:6443: connect: connection refused" interval="800ms" Nov 1 00:18:24.621198 kubelet[2305]: I1101 00:18:24.620860 2305 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:24.621477 kubelet[2305]: E1101 00:18:24.621445 2305 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.42:6443/api/v1/nodes\": dial tcp 10.200.20.42:6443: connect: connection refused" node="ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:24.691663 kubelet[2305]: W1101 00:18:24.691568 2305 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Nov 1 00:18:24.691663 kubelet[2305]: E1101 
00:18:24.691628 2305 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.20.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:18:24.889107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2771475360.mount: Deactivated successfully. Nov 1 00:18:24.905578 env[1586]: time="2025-11-01T00:18:24.905544476Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:24.916644 env[1586]: time="2025-11-01T00:18:24.916600652Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:24.922912 env[1586]: time="2025-11-01T00:18:24.922880398Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:24.925650 env[1586]: time="2025-11-01T00:18:24.925615353Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:24.931381 env[1586]: time="2025-11-01T00:18:24.931340220Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:24.937106 env[1586]: time="2025-11-01T00:18:24.937080248Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:24.940993 env[1586]: 
time="2025-11-01T00:18:24.940959200Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:24.946663 env[1586]: time="2025-11-01T00:18:24.946626307Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:24.950309 env[1586]: time="2025-11-01T00:18:24.950264620Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:24.956923 env[1586]: time="2025-11-01T00:18:24.956887365Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:24.959309 env[1586]: time="2025-11-01T00:18:24.959260280Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:24.976245 env[1586]: time="2025-11-01T00:18:24.976214124Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:25.002436 env[1586]: time="2025-11-01T00:18:25.001736029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:18:25.002436 env[1586]: time="2025-11-01T00:18:25.001771549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:18:25.002436 env[1586]: time="2025-11-01T00:18:25.001781309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:25.002436 env[1586]: time="2025-11-01T00:18:25.001880789Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d392676c23299ba0f294536dd64cf31cef13bdfe9621df4c4a3df9c8755ad94d pid=2345 runtime=io.containerd.runc.v2 Nov 1 00:18:25.009577 env[1586]: time="2025-11-01T00:18:25.009513373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:18:25.009577 env[1586]: time="2025-11-01T00:18:25.009549893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:18:25.009920 env[1586]: time="2025-11-01T00:18:25.009559853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:25.010352 env[1586]: time="2025-11-01T00:18:25.010300571Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d8046cccd8d4a804c76cc2874d9aafdf78098bdf12ba6fca46c4944213fbf553 pid=2365 runtime=io.containerd.runc.v2 Nov 1 00:18:25.051508 env[1586]: time="2025-11-01T00:18:25.051437045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:18:25.051688 env[1586]: time="2025-11-01T00:18:25.051665284Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:18:25.051765 env[1586]: time="2025-11-01T00:18:25.051745404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:25.053499 env[1586]: time="2025-11-01T00:18:25.053454801Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8ae5b72eca72f4b2a3d941f624e88b4a3991e248a8834850c6f05c1ce7bd8840 pid=2419 runtime=io.containerd.runc.v2 Nov 1 00:18:25.068956 env[1586]: time="2025-11-01T00:18:25.068915888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.8-n-c51a7922c9,Uid:4b54b738976d9f2d74a49404f333944f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d392676c23299ba0f294536dd64cf31cef13bdfe9621df4c4a3df9c8755ad94d\"" Nov 1 00:18:25.072411 env[1586]: time="2025-11-01T00:18:25.072381761Z" level=info msg="CreateContainer within sandbox \"d392676c23299ba0f294536dd64cf31cef13bdfe9621df4c4a3df9c8755ad94d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:18:25.083194 env[1586]: time="2025-11-01T00:18:25.083143458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.8-n-c51a7922c9,Uid:16ae8945348385f766cc326a3109f53e,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8046cccd8d4a804c76cc2874d9aafdf78098bdf12ba6fca46c4944213fbf553\"" Nov 1 00:18:25.091944 env[1586]: time="2025-11-01T00:18:25.091901600Z" level=info msg="CreateContainer within sandbox \"d8046cccd8d4a804c76cc2874d9aafdf78098bdf12ba6fca46c4944213fbf553\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:18:25.110136 env[1586]: time="2025-11-01T00:18:25.110096082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.8-n-c51a7922c9,Uid:a8f4542e32ee8ef8a93b262d1797c0d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ae5b72eca72f4b2a3d941f624e88b4a3991e248a8834850c6f05c1ce7bd8840\"" Nov 1 00:18:25.112245 env[1586]: time="2025-11-01T00:18:25.112216077Z" level=info msg="CreateContainer within sandbox 
\"8ae5b72eca72f4b2a3d941f624e88b4a3991e248a8834850c6f05c1ce7bd8840\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:18:25.119345 env[1586]: time="2025-11-01T00:18:25.119314302Z" level=info msg="CreateContainer within sandbox \"d392676c23299ba0f294536dd64cf31cef13bdfe9621df4c4a3df9c8755ad94d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8f7bb173d4feb3609eae2f7f833fa8bc43b0b2692da2a26e26fadd8a8aec1d3f\"" Nov 1 00:18:25.120004 env[1586]: time="2025-11-01T00:18:25.119972901Z" level=info msg="StartContainer for \"8f7bb173d4feb3609eae2f7f833fa8bc43b0b2692da2a26e26fadd8a8aec1d3f\"" Nov 1 00:18:25.149126 env[1586]: time="2025-11-01T00:18:25.148396001Z" level=info msg="CreateContainer within sandbox \"d8046cccd8d4a804c76cc2874d9aafdf78098bdf12ba6fca46c4944213fbf553\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"041eac535e898b68c7ba49836e4809127f43762d2ab259c7d16604a843d4a51d\"" Nov 1 00:18:25.149582 kubelet[2305]: W1101 00:18:25.149517 2305 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Nov 1 00:18:25.149643 kubelet[2305]: E1101 00:18:25.149594 2305 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.20.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:18:25.149989 env[1586]: time="2025-11-01T00:18:25.149955158Z" level=info msg="StartContainer for \"041eac535e898b68c7ba49836e4809127f43762d2ab259c7d16604a843d4a51d\"" Nov 1 00:18:25.163464 kubelet[2305]: W1101 00:18:25.163399 2305 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-c51a7922c9&limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Nov 1 00:18:25.163591 kubelet[2305]: E1101 00:18:25.163471 2305 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.20.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.8-n-c51a7922c9&limit=500&resourceVersion=0\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:18:25.182329 kubelet[2305]: E1101 00:18:25.175465 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.8-n-c51a7922c9?timeout=10s\": dial tcp 10.200.20.42:6443: connect: connection refused" interval="1.6s" Nov 1 00:18:25.182742 env[1586]: time="2025-11-01T00:18:25.182703489Z" level=info msg="CreateContainer within sandbox \"8ae5b72eca72f4b2a3d941f624e88b4a3991e248a8834850c6f05c1ce7bd8840\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"654ec97314273581c0727c7c99a403837841777011e3df9f48361f1e8444fd7e\"" Nov 1 00:18:25.183334 env[1586]: time="2025-11-01T00:18:25.183306968Z" level=info msg="StartContainer for \"654ec97314273581c0727c7c99a403837841777011e3df9f48361f1e8444fd7e\"" Nov 1 00:18:25.197837 env[1586]: time="2025-11-01T00:18:25.197791498Z" level=info msg="StartContainer for \"8f7bb173d4feb3609eae2f7f833fa8bc43b0b2692da2a26e26fadd8a8aec1d3f\" returns successfully" Nov 1 00:18:25.267313 env[1586]: time="2025-11-01T00:18:25.267245192Z" level=info msg="StartContainer for \"654ec97314273581c0727c7c99a403837841777011e3df9f48361f1e8444fd7e\" returns successfully" Nov 1 00:18:25.276813 env[1586]: time="2025-11-01T00:18:25.276769692Z" level=info 
msg="StartContainer for \"041eac535e898b68c7ba49836e4809127f43762d2ab259c7d16604a843d4a51d\" returns successfully" Nov 1 00:18:25.376439 kubelet[2305]: W1101 00:18:25.376339 2305 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.42:6443: connect: connection refused Nov 1 00:18:25.376439 kubelet[2305]: E1101 00:18:25.376404 2305 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.20.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.20.42:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:18:25.423316 kubelet[2305]: I1101 00:18:25.423218 2305 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:25.423762 kubelet[2305]: E1101 00:18:25.423570 2305 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.20.42:6443/api/v1/nodes\": dial tcp 10.200.20.42:6443: connect: connection refused" node="ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:25.877024 kubelet[2305]: E1101 00:18:25.876984 2305 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-c51a7922c9\" not found" node="ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:25.882593 kubelet[2305]: E1101 00:18:25.882555 2305 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-c51a7922c9\" not found" node="ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:25.892731 kubelet[2305]: E1101 00:18:25.892684 2305 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-c51a7922c9\" not found" node="ci-3510.3.8-n-c51a7922c9" Nov 1 
00:18:26.892362 kubelet[2305]: E1101 00:18:26.892327 2305 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-c51a7922c9\" not found" node="ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:26.893227 kubelet[2305]: E1101 00:18:26.893199 2305 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-3510.3.8-n-c51a7922c9\" not found" node="ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:27.025229 kubelet[2305]: I1101 00:18:27.024953 2305 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:27.562482 kubelet[2305]: E1101 00:18:27.562449 2305 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.8-n-c51a7922c9\" not found" node="ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:27.607481 kubelet[2305]: I1101 00:18:27.607450 2305 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:27.672912 kubelet[2305]: I1101 00:18:27.672877 2305 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:27.688761 kubelet[2305]: I1101 00:18:27.688735 2305 apiserver.go:52] "Watching apiserver" Nov 1 00:18:27.690184 kubelet[2305]: E1101 00:18:27.690158 2305 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-c51a7922c9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:27.690321 kubelet[2305]: I1101 00:18:27.690305 2305 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:27.695636 kubelet[2305]: E1101 00:18:27.695590 2305 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-3510.3.8-n-c51a7922c9\" is forbidden: no PriorityClass with 
name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:27.695636 kubelet[2305]: I1101 00:18:27.695629 2305 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:27.700004 kubelet[2305]: E1101 00:18:27.699981 2305 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-c51a7922c9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:27.772778 kubelet[2305]: I1101 00:18:27.772746 2305 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:18:29.782686 systemd[1]: Reloading. Nov 1 00:18:29.807691 kubelet[2305]: I1101 00:18:29.807664 2305 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:29.818980 kubelet[2305]: W1101 00:18:29.818943 2305 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 00:18:29.855829 /usr/lib/systemd/system-generators/torcx-generator[2602]: time="2025-11-01T00:18:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:18:29.856173 /usr/lib/systemd/system-generators/torcx-generator[2602]: time="2025-11-01T00:18:29Z" level=info msg="torcx already run" Nov 1 00:18:29.939024 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Nov 1 00:18:29.939043 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:18:29.955759 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:18:30.041694 systemd[1]: Stopping kubelet.service... Nov 1 00:18:30.063794 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:18:30.064086 systemd[1]: Stopped kubelet.service. Nov 1 00:18:30.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:18:30.070166 kernel: kauditd_printk_skb: 43 callbacks suppressed Nov 1 00:18:30.070260 kernel: audit: type=1131 audit(1761956310.063:239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:18:30.069977 systemd[1]: Starting kubelet.service... Nov 1 00:18:30.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:18:30.173191 systemd[1]: Started kubelet.service. Nov 1 00:18:30.196123 kernel: audit: type=1130 audit(1761956310.172:240): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:18:30.234997 kubelet[2677]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:18:30.235442 kubelet[2677]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:18:30.235495 kubelet[2677]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:18:30.235933 kubelet[2677]: I1101 00:18:30.235877 2677 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:18:30.243109 kubelet[2677]: I1101 00:18:30.243080 2677 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:18:30.243239 kubelet[2677]: I1101 00:18:30.243228 2677 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:18:30.243599 kubelet[2677]: I1101 00:18:30.243578 2677 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:18:30.245105 kubelet[2677]: I1101 00:18:30.245084 2677 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 1 00:18:30.251137 kubelet[2677]: I1101 00:18:30.250934 2677 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:18:30.255883 kubelet[2677]: E1101 00:18:30.255840 2677 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:18:30.255883 kubelet[2677]: I1101 00:18:30.255871 2677 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Nov 1 00:18:30.259703 kubelet[2677]: I1101 00:18:30.259676 2677 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 1 00:18:30.261367 kubelet[2677]: I1101 00:18:30.260120 2677 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:18:30.261367 kubelet[2677]: I1101 00:18:30.260152 2677 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.8-n-c51a7922c9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":nu
ll,"CgroupVersion":1} Nov 1 00:18:30.261367 kubelet[2677]: I1101 00:18:30.260489 2677 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:18:30.261367 kubelet[2677]: I1101 00:18:30.260499 2677 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:18:30.261712 kubelet[2677]: I1101 00:18:30.260545 2677 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:18:30.261712 kubelet[2677]: I1101 00:18:30.260642 2677 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:18:30.261712 kubelet[2677]: I1101 00:18:30.260651 2677 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:18:30.261712 kubelet[2677]: I1101 00:18:30.260670 2677 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:18:30.261712 kubelet[2677]: I1101 00:18:30.260847 2677 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:18:30.271467 kubelet[2677]: I1101 00:18:30.271431 2677 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:18:30.272115 kubelet[2677]: I1101 00:18:30.272101 2677 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:18:30.274220 kubelet[2677]: I1101 00:18:30.274190 2677 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:18:30.274362 kubelet[2677]: I1101 00:18:30.274350 2677 server.go:1287] "Started kubelet" Nov 1 00:18:30.275000 audit[2677]: AVC avc: denied { mac_admin } for pid=2677 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:18:30.276962 kubelet[2677]: I1101 00:18:30.276931 2677 kubelet.go:1507] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr(label=system_u:object_r:container_file_t:s0) 
/var/lib/kubelet/plugins_registry: invalid argument" Nov 1 00:18:30.277074 kubelet[2677]: I1101 00:18:30.277060 2677 kubelet.go:1511] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Nov 1 00:18:30.277158 kubelet[2677]: I1101 00:18:30.277148 2677 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:18:30.281319 kubelet[2677]: I1101 00:18:30.281278 2677 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:18:30.585103 kernel: audit: type=1400 audit(1761956310.275:241): avc: denied { mac_admin } for pid=2677 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:18:30.585185 kernel: audit: type=1401 audit(1761956310.275:241): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:18:30.585204 kernel: audit: type=1300 audit(1761956310.275:241): arch=c00000b7 syscall=5 success=no exit=-22 a0=400055ec00 a1=40000499f8 a2=400055ebd0 a3=25 items=0 ppid=1 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:30.585220 kernel: audit: type=1327 audit(1761956310.275:241): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:18:30.585235 kernel: audit: type=1400 audit(1761956310.275:242): avc: denied { mac_admin } for pid=2677 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:18:30.585256 
kernel: audit: type=1401 audit(1761956310.275:242): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:18:30.585272 kernel: audit: type=1300 audit(1761956310.275:242): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000206d20 a1=4000049a10 a2=400055ec90 a3=25 items=0 ppid=1 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:30.585319 kernel: audit: type=1327 audit(1761956310.275:242): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:18:30.275000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:18:30.275000 audit[2677]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=400055ec00 a1=40000499f8 a2=400055ebd0 a3=25 items=0 ppid=1 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:30.275000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:18:30.275000 audit[2677]: AVC avc: denied { mac_admin } for pid=2677 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:18:30.275000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:18:30.275000 audit[2677]: SYSCALL arch=c00000b7 syscall=5 success=no 
exit=-22 a0=4000206d20 a1=4000049a10 a2=400055ec90 a3=25 items=0 ppid=1 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:30.275000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:18:30.494000 audit[2677]: AVC avc: denied { mac_admin } for pid=2677 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:18:30.494000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:18:30.494000 audit[2677]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4001097290 a1=400109c378 a2=4001097260 a3=25 items=0 ppid=1 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:30.494000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:18:30.585712 kubelet[2677]: I1101 00:18:30.282095 2677 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:18:30.585712 kubelet[2677]: I1101 00:18:30.282920 2677 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:18:30.585712 kubelet[2677]: I1101 00:18:30.300323 2677 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:18:30.585712 kubelet[2677]: E1101 00:18:30.301899 2677 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-3510.3.8-n-c51a7922c9\" not found" Nov 1 00:18:30.585712 kubelet[2677]: I1101 00:18:30.302157 2677 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:18:30.585712 kubelet[2677]: I1101 00:18:30.302405 2677 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:18:30.585712 kubelet[2677]: I1101 00:18:30.302548 2677 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:18:30.585712 kubelet[2677]: I1101 00:18:30.310409 2677 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:18:30.585712 kubelet[2677]: I1101 00:18:30.310507 2677 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:18:30.585712 kubelet[2677]: I1101 00:18:30.340788 2677 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:18:30.585712 kubelet[2677]: I1101 00:18:30.343197 2677 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 00:18:30.585712 kubelet[2677]: I1101 00:18:30.351420 2677 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 00:18:30.585712 kubelet[2677]: I1101 00:18:30.351441 2677 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:18:30.585712 kubelet[2677]: I1101 00:18:30.351468 2677 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
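The audit records above carry the kubelet's command line as a hex-encoded PROCTITLE field: NUL bytes separate argv entries, and auditd truncates long titles, which is why the value ends in `--confi`. The matching SYSCALL records report `exit=-22`, i.e. the negated errno EINVAL from `setxattr(2)` rejecting the SELinux label. A minimal sketch for decoding such a field (the helper name is ours; the hex value is copied verbatim from the records above):

```python
import errno

def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE hex dump; argv entries are NUL-separated."""
    return bytes.fromhex(hex_str).decode("utf-8", errors="replace").replace("\x00", " ")

# PROCTITLE value from the SELINUX_ERR/SYSCALL records above
proctitle = "2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669"

print(decode_proctitle(proctitle))
# /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --confi

# exit=-22 in the matching SYSCALL records is -EINVAL
assert errno.EINVAL == 22
```
The same decoder applies to the iptables/ip6tables PROCTITLE records further down in this section.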
Nov 1 00:18:30.585712 kubelet[2677]: I1101 00:18:30.351475 2677 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:18:30.585712 kubelet[2677]: E1101 00:18:30.351516 2677 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:18:30.586091 kubelet[2677]: E1101 00:18:30.451891 2677 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 1 00:18:30.586091 kubelet[2677]: I1101 00:18:30.494736 2677 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:18:30.586091 kubelet[2677]: I1101 00:18:30.494748 2677 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:18:30.586091 kubelet[2677]: I1101 00:18:30.494767 2677 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:18:30.586091 kubelet[2677]: I1101 00:18:30.494917 2677 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:18:30.586091 kubelet[2677]: I1101 00:18:30.494927 2677 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:18:30.586091 kubelet[2677]: I1101 00:18:30.494945 2677 policy_none.go:49] "None policy: Start" Nov 1 00:18:30.586091 kubelet[2677]: I1101 00:18:30.494954 2677 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:18:30.586091 kubelet[2677]: I1101 00:18:30.494963 2677 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:18:30.586091 kubelet[2677]: I1101 00:18:30.495050 2677 state_mem.go:75] "Updated machine memory state" Nov 1 00:18:30.586091 kubelet[2677]: I1101 00:18:30.496117 2677 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:18:30.586091 kubelet[2677]: I1101 00:18:30.497408 2677 server.go:94] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" Nov 1 00:18:30.587921 kubelet[2677]: I1101 00:18:30.587896 2677 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:18:30.634091 kubelet[2677]: I1101 00:18:30.634057 2677 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:18:30.634235 kubelet[2677]: I1101 00:18:30.634081 2677 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:18:30.635104 kubelet[2677]: I1101 00:18:30.634431 2677 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:18:30.637023 kubelet[2677]: E1101 00:18:30.637005 2677 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:18:30.652572 kubelet[2677]: I1101 00:18:30.652549 2677 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:30.653681 kubelet[2677]: I1101 00:18:30.653476 2677 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:30.653963 kubelet[2677]: I1101 00:18:30.653934 2677 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:30.661773 kubelet[2677]: W1101 00:18:30.661729 2677 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 00:18:30.667653 kubelet[2677]: W1101 00:18:30.667629 2677 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not 
contain dots] Nov 1 00:18:30.667755 kubelet[2677]: W1101 00:18:30.667742 2677 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 00:18:30.667857 kubelet[2677]: E1101 00:18:30.667843 2677 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-3510.3.8-n-c51a7922c9\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:30.720171 kubelet[2677]: I1101 00:18:30.720143 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a8f4542e32ee8ef8a93b262d1797c0d7-ca-certs\") pod \"kube-apiserver-ci-3510.3.8-n-c51a7922c9\" (UID: \"a8f4542e32ee8ef8a93b262d1797c0d7\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:30.720345 kubelet[2677]: I1101 00:18:30.720328 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8f4542e32ee8ef8a93b262d1797c0d7-k8s-certs\") pod \"kube-apiserver-ci-3510.3.8-n-c51a7922c9\" (UID: \"a8f4542e32ee8ef8a93b262d1797c0d7\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:30.720442 kubelet[2677]: I1101 00:18:30.720427 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b54b738976d9f2d74a49404f333944f-ca-certs\") pod \"kube-controller-manager-ci-3510.3.8-n-c51a7922c9\" (UID: \"4b54b738976d9f2d74a49404f333944f\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:30.720538 kubelet[2677]: I1101 00:18:30.720525 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b54b738976d9f2d74a49404f333944f-k8s-certs\") pod 
\"kube-controller-manager-ci-3510.3.8-n-c51a7922c9\" (UID: \"4b54b738976d9f2d74a49404f333944f\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:30.720627 kubelet[2677]: I1101 00:18:30.720612 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b54b738976d9f2d74a49404f333944f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.8-n-c51a7922c9\" (UID: \"4b54b738976d9f2d74a49404f333944f\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:30.720712 kubelet[2677]: I1101 00:18:30.720699 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/16ae8945348385f766cc326a3109f53e-kubeconfig\") pod \"kube-scheduler-ci-3510.3.8-n-c51a7922c9\" (UID: \"16ae8945348385f766cc326a3109f53e\") " pod="kube-system/kube-scheduler-ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:30.720795 kubelet[2677]: I1101 00:18:30.720783 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8f4542e32ee8ef8a93b262d1797c0d7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.8-n-c51a7922c9\" (UID: \"a8f4542e32ee8ef8a93b262d1797c0d7\") " pod="kube-system/kube-apiserver-ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:30.720871 kubelet[2677]: I1101 00:18:30.720859 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4b54b738976d9f2d74a49404f333944f-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.8-n-c51a7922c9\" (UID: \"4b54b738976d9f2d74a49404f333944f\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:30.720947 kubelet[2677]: I1101 00:18:30.720935 2677 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b54b738976d9f2d74a49404f333944f-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.8-n-c51a7922c9\" (UID: \"4b54b738976d9f2d74a49404f333944f\") " pod="kube-system/kube-controller-manager-ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:30.741752 kubelet[2677]: I1101 00:18:30.741733 2677 kubelet_node_status.go:75] "Attempting to register node" node="ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:30.757140 kubelet[2677]: I1101 00:18:30.757115 2677 kubelet_node_status.go:124] "Node was previously registered" node="ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:30.757239 kubelet[2677]: I1101 00:18:30.757187 2677 kubelet_node_status.go:78] "Successfully registered node" node="ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:31.266113 kubelet[2677]: I1101 00:18:31.266074 2677 apiserver.go:52] "Watching apiserver" Nov 1 00:18:31.302682 kubelet[2677]: I1101 00:18:31.302632 2677 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:18:31.363886 kubelet[2677]: I1101 00:18:31.363783 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.8-n-c51a7922c9" podStartSLOduration=1.3637645219999999 podStartE2EDuration="1.363764522s" podCreationTimestamp="2025-11-01 00:18:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:18:31.351430504 +0000 UTC m=+1.161747538" watchObservedRunningTime="2025-11-01 00:18:31.363764522 +0000 UTC m=+1.174081556" Nov 1 00:18:31.378550 kubelet[2677]: I1101 00:18:31.378483 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.8-n-c51a7922c9" podStartSLOduration=1.378467855 podStartE2EDuration="1.378467855s" podCreationTimestamp="2025-11-01 00:18:30 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:18:31.364886439 +0000 UTC m=+1.175203513" watchObservedRunningTime="2025-11-01 00:18:31.378467855 +0000 UTC m=+1.188784929" Nov 1 00:18:31.396455 kubelet[2677]: I1101 00:18:31.396393 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.8-n-c51a7922c9" podStartSLOduration=2.396378982 podStartE2EDuration="2.396378982s" podCreationTimestamp="2025-11-01 00:18:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:18:31.378717774 +0000 UTC m=+1.189034848" watchObservedRunningTime="2025-11-01 00:18:31.396378982 +0000 UTC m=+1.206696056" Nov 1 00:18:31.455537 kubelet[2677]: I1101 00:18:31.455505 2677 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:31.464259 kubelet[2677]: W1101 00:18:31.464223 2677 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 00:18:31.464427 kubelet[2677]: E1101 00:18:31.464402 2677 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-3510.3.8-n-c51a7922c9\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.8-n-c51a7922c9" Nov 1 00:18:35.186262 kubelet[2677]: I1101 00:18:35.186236 2677 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:18:35.187054 env[1586]: time="2025-11-01T00:18:35.186956942Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
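The kubelet entries above show the node's pod CIDR moving from empty to 192.168.0.0/24. As a quick sanity check on what that allocation covers (a generic sketch, nothing kubelet-specific):

```python
import ipaddress

# The podCIDR handed to this node in the log above
pod_cidr = ipaddress.ip_network("192.168.0.0/24")

print(pod_cidr.num_addresses)       # 256 addresses in the node's pod range
print(pod_cidr.broadcast_address)   # 192.168.0.255
print(ipaddress.ip_address("192.168.0.17") in pod_cidr)  # True
```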
Nov 1 00:18:35.187434 kubelet[2677]: I1101 00:18:35.187411 2677 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:18:35.748775 kubelet[2677]: I1101 00:18:35.748733 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4ll2\" (UniqueName: \"kubernetes.io/projected/7353da9b-d300-48ae-9d41-d1f8f6db681b-kube-api-access-c4ll2\") pod \"kube-proxy-l9b4s\" (UID: \"7353da9b-d300-48ae-9d41-d1f8f6db681b\") " pod="kube-system/kube-proxy-l9b4s" Nov 1 00:18:35.748775 kubelet[2677]: I1101 00:18:35.748780 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7353da9b-d300-48ae-9d41-d1f8f6db681b-xtables-lock\") pod \"kube-proxy-l9b4s\" (UID: \"7353da9b-d300-48ae-9d41-d1f8f6db681b\") " pod="kube-system/kube-proxy-l9b4s" Nov 1 00:18:35.748930 kubelet[2677]: I1101 00:18:35.748801 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7353da9b-d300-48ae-9d41-d1f8f6db681b-lib-modules\") pod \"kube-proxy-l9b4s\" (UID: \"7353da9b-d300-48ae-9d41-d1f8f6db681b\") " pod="kube-system/kube-proxy-l9b4s" Nov 1 00:18:35.748930 kubelet[2677]: I1101 00:18:35.748817 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7353da9b-d300-48ae-9d41-d1f8f6db681b-kube-proxy\") pod \"kube-proxy-l9b4s\" (UID: \"7353da9b-d300-48ae-9d41-d1f8f6db681b\") " pod="kube-system/kube-proxy-l9b4s" Nov 1 00:18:35.856541 kubelet[2677]: I1101 00:18:35.856509 2677 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 1 00:18:35.991090 env[1586]: time="2025-11-01T00:18:35.991055172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l9b4s,Uid:7353da9b-d300-48ae-9d41-d1f8f6db681b,Namespace:kube-system,Attempt:0,}" Nov 1 00:18:36.025597 env[1586]: time="2025-11-01T00:18:36.024995917Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:18:36.025597 env[1586]: time="2025-11-01T00:18:36.025062757Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:18:36.025597 env[1586]: time="2025-11-01T00:18:36.025072877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:36.025932 env[1586]: time="2025-11-01T00:18:36.025826196Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fcef184c03ac7d6035ff72fe5b47210387d25abb7609a714032cf9a86d736446 pid=2729 runtime=io.containerd.runc.v2 Nov 1 00:18:36.074908 env[1586]: time="2025-11-01T00:18:36.074871597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l9b4s,Uid:7353da9b-d300-48ae-9d41-d1f8f6db681b,Namespace:kube-system,Attempt:0,} returns sandbox id \"fcef184c03ac7d6035ff72fe5b47210387d25abb7609a714032cf9a86d736446\"" Nov 1 00:18:36.082265 env[1586]: time="2025-11-01T00:18:36.082231385Z" level=info msg="CreateContainer within sandbox \"fcef184c03ac7d6035ff72fe5b47210387d25abb7609a714032cf9a86d736446\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 00:18:36.140824 env[1586]: time="2025-11-01T00:18:36.140744970Z" level=info msg="CreateContainer within sandbox \"fcef184c03ac7d6035ff72fe5b47210387d25abb7609a714032cf9a86d736446\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"14b9465f6f400339dc6f9055aa3d0b15256ad000c318c19479f93dd0c062cf7c\"" Nov 1 00:18:36.142666 env[1586]: time="2025-11-01T00:18:36.142429807Z" level=info msg="StartContainer for \"14b9465f6f400339dc6f9055aa3d0b15256ad000c318c19479f93dd0c062cf7c\"" Nov 1 00:18:36.228597 env[1586]: time="2025-11-01T00:18:36.228556148Z" level=info msg="StartContainer for \"14b9465f6f400339dc6f9055aa3d0b15256ad000c318c19479f93dd0c062cf7c\" returns successfully" Nov 1 00:18:36.251644 kubelet[2677]: I1101 00:18:36.251599 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/112a9854-3484-4215-96cf-ea834060b019-var-lib-calico\") pod \"tigera-operator-7dcd859c48-vhkcl\" (UID: \"112a9854-3484-4215-96cf-ea834060b019\") " pod="tigera-operator/tigera-operator-7dcd859c48-vhkcl" Nov 1 00:18:36.252020 kubelet[2677]: I1101 00:18:36.252004 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nfdl\" (UniqueName: \"kubernetes.io/projected/112a9854-3484-4215-96cf-ea834060b019-kube-api-access-9nfdl\") pod \"tigera-operator-7dcd859c48-vhkcl\" (UID: \"112a9854-3484-4215-96cf-ea834060b019\") " pod="tigera-operator/tigera-operator-7dcd859c48-vhkcl" Nov 1 00:18:36.390000 audit[2833]: NETFILTER_CFG table=mangle:41 family=10 entries=1 op=nft_register_chain pid=2833 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:18:36.396587 kernel: kauditd_printk_skb: 4 callbacks suppressed Nov 1 00:18:36.396682 kernel: audit: type=1325 audit(1761956316.390:244): table=mangle:41 family=10 entries=1 op=nft_register_chain pid=2833 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:18:36.390000 audit[2833]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdce9be90 a2=0 a3=1 items=0 ppid=2782 pid=2833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.436936 kernel: audit: type=1300 audit(1761956316.390:244): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdce9be90 a2=0 a3=1 items=0 ppid=2782 pid=2833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.390000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 00:18:36.451438 kernel: audit: type=1327 audit(1761956316.390:244): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 00:18:36.390000 audit[2831]: NETFILTER_CFG table=mangle:42 family=2 entries=1 op=nft_register_chain pid=2831 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:18:36.465322 kernel: audit: type=1325 audit(1761956316.390:245): table=mangle:42 family=2 entries=1 op=nft_register_chain pid=2831 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:18:36.390000 audit[2831]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe823a830 a2=0 a3=1 items=0 ppid=2782 pid=2831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.491592 kernel: audit: type=1300 audit(1761956316.390:245): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe823a830 a2=0 a3=1 items=0 ppid=2782 pid=2831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.390000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 00:18:36.505615 kernel: audit: type=1327 audit(1761956316.390:245): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 00:18:36.395000 audit[2835]: NETFILTER_CFG table=nat:43 family=10 entries=1 op=nft_register_chain pid=2835 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:18:36.519676 kernel: audit: type=1325 audit(1761956316.395:246): table=nat:43 family=10 entries=1 op=nft_register_chain pid=2835 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:18:36.520453 env[1586]: time="2025-11-01T00:18:36.520416596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-vhkcl,Uid:112a9854-3484-4215-96cf-ea834060b019,Namespace:tigera-operator,Attempt:0,}" Nov 1 00:18:36.395000 audit[2835]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd97b70f0 a2=0 a3=1 items=0 ppid=2782 pid=2835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.547769 kernel: audit: type=1300 audit(1761956316.395:246): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd97b70f0 a2=0 a3=1 items=0 ppid=2782 pid=2835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.395000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Nov 1 00:18:36.561773 kernel: audit: type=1327 audit(1761956316.395:246): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Nov 1 00:18:36.395000 
audit[2836]: NETFILTER_CFG table=nat:44 family=2 entries=1 op=nft_register_chain pid=2836 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:18:36.576950 kernel: audit: type=1325 audit(1761956316.395:247): table=nat:44 family=2 entries=1 op=nft_register_chain pid=2836 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:18:36.395000 audit[2836]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff44c4630 a2=0 a3=1 items=0 ppid=2782 pid=2836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.395000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Nov 1 00:18:36.395000 audit[2837]: NETFILTER_CFG table=filter:45 family=10 entries=1 op=nft_register_chain pid=2837 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:18:36.395000 audit[2837]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcd0bd5a0 a2=0 a3=1 items=0 ppid=2782 pid=2837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.395000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Nov 1 00:18:36.400000 audit[2838]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=2838 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:18:36.400000 audit[2838]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdb500610 a2=0 a3=1 items=0 ppid=2782 pid=2838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Nov 1 00:18:36.400000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Nov 1 00:18:36.490000 audit[2839]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2839 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:18:36.490000 audit[2839]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd9a3ff00 a2=0 a3=1 items=0 ppid=2782 pid=2839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.490000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Nov 1 00:18:36.500000 audit[2841]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2841 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:18:36.500000 audit[2841]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd4fef960 a2=0 a3=1 items=0 ppid=2782 pid=2841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.500000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Nov 1 00:18:36.510000 audit[2844]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_rule pid=2844 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:18:36.510000 audit[2844]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffcd4c2730 a2=0 a3=1 items=0 ppid=2782 pid=2844 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.510000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Nov 1 00:18:36.515000 audit[2845]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2845 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:18:36.515000 audit[2845]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc375d140 a2=0 a3=1 items=0 ppid=2782 pid=2845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.515000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Nov 1 00:18:36.525000 audit[2847]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2847 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:18:36.525000 audit[2847]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffde4b1440 a2=0 a3=1 items=0 ppid=2782 pid=2847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.525000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Nov 1 00:18:36.525000 audit[2848]: NETFILTER_CFG table=filter:52 
family=2 entries=1 op=nft_register_chain pid=2848 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:18:36.525000 audit[2848]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe045d2d0 a2=0 a3=1 items=0 ppid=2782 pid=2848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.525000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Nov 1 00:18:36.530000 audit[2850]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2850 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:18:36.530000 audit[2850]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff4366a90 a2=0 a3=1 items=0 ppid=2782 pid=2850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.530000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Nov 1 00:18:36.540000 audit[2853]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=2853 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:18:36.540000 audit[2853]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff737af20 a2=0 a3=1 items=0 ppid=2782 pid=2853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.540000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Nov 1 00:18:36.545000 audit[2854]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_chain pid=2854 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:18:36.545000 audit[2854]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffeddfc3b0 a2=0 a3=1 items=0 ppid=2782 pid=2854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.545000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Nov 1 00:18:36.552000 audit[2856]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2856 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:18:36.552000 audit[2856]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffce687200 a2=0 a3=1 items=0 ppid=2782 pid=2856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.552000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Nov 1 00:18:36.552000 audit[2857]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=2857 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:18:36.552000 audit[2857]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe45ac8c0 a2=0 a3=1 items=0 
ppid=2782 pid=2857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.552000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Nov 1 00:18:36.557000 audit[2859]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_rule pid=2859 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:18:36.557000 audit[2859]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff468e320 a2=0 a3=1 items=0 ppid=2782 pid=2859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.557000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Nov 1 00:18:36.562000 audit[2862]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_rule pid=2862 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:18:36.562000 audit[2862]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff9452b50 a2=0 a3=1 items=0 ppid=2782 pid=2862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.562000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A 
Nov 1 00:18:36.579000 audit[2865]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_rule pid=2865 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:18:36.579000 audit[2865]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffff80bc70 a2=0 a3=1 items=0 ppid=2782 pid=2865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.579000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Nov 1 00:18:36.581000 audit[2866]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2866 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:18:36.581000 audit[2866]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff67bd1c0 a2=0 a3=1 items=0 ppid=2782 pid=2866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.581000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Nov 1 00:18:36.583000 audit[2868]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2868 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:18:36.583000 audit[2868]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffff5980b70 a2=0 a3=1 items=0 ppid=2782 pid=2868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.583000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 00:18:36.586000 audit[2871]: NETFILTER_CFG table=nat:63 family=2 entries=1 op=nft_register_rule pid=2871 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:18:36.586000 audit[2871]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffe9bed80 a2=0 a3=1 items=0 ppid=2782 pid=2871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.586000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 00:18:36.587000 audit[2872]: NETFILTER_CFG table=nat:64 family=2 entries=1 op=nft_register_chain pid=2872 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:18:36.587000 audit[2872]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffef5fb490 a2=0 a3=1 items=0 ppid=2782 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.587000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Nov 1 00:18:36.589000 audit[2874]: NETFILTER_CFG table=nat:65 family=2 entries=1 op=nft_register_rule pid=2874 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:18:36.589000 audit[2874]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffd0e696d0 a2=0 a3=1 items=0 ppid=2782 pid=2874 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.589000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Nov 1 00:18:36.607153 env[1586]: time="2025-11-01T00:18:36.607072456Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:18:36.607313 env[1586]: time="2025-11-01T00:18:36.607168576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:18:36.607313 env[1586]: time="2025-11-01T00:18:36.607194936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:36.607500 env[1586]: time="2025-11-01T00:18:36.607438256Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5ceca0d06b7a19e437f2c7cb29e705f44dbe06a79ea3342e6e4bcea387cfdee6 pid=2890 runtime=io.containerd.runc.v2 Nov 1 00:18:36.653479 env[1586]: time="2025-11-01T00:18:36.651900704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-vhkcl,Uid:112a9854-3484-4215-96cf-ea834060b019,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5ceca0d06b7a19e437f2c7cb29e705f44dbe06a79ea3342e6e4bcea387cfdee6\"" Nov 1 00:18:36.655323 env[1586]: time="2025-11-01T00:18:36.654891899Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 1 00:18:36.715000 audit[2880]: NETFILTER_CFG table=filter:66 family=2 entries=8 op=nft_register_rule pid=2880 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:18:36.715000 
audit[2880]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffde542080 a2=0 a3=1 items=0 ppid=2782 pid=2880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.715000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:18:36.771000 audit[2880]: NETFILTER_CFG table=nat:67 family=2 entries=14 op=nft_register_chain pid=2880 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:18:36.771000 audit[2880]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffde542080 a2=0 a3=1 items=0 ppid=2782 pid=2880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.771000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:18:36.772000 audit[2926]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2926 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:18:36.772000 audit[2926]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffdcffad10 a2=0 a3=1 items=0 ppid=2782 pid=2926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.772000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Nov 1 00:18:36.775000 audit[2928]: NETFILTER_CFG table=filter:69 family=10 entries=2 op=nft_register_chain pid=2928 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Nov 1 00:18:36.775000 audit[2928]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffd9c58d00 a2=0 a3=1 items=0 ppid=2782 pid=2928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.775000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Nov 1 00:18:36.779000 audit[2931]: NETFILTER_CFG table=filter:70 family=10 entries=2 op=nft_register_chain pid=2931 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:18:36.779000 audit[2931]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffdff68870 a2=0 a3=1 items=0 ppid=2782 pid=2931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.779000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Nov 1 00:18:36.781000 audit[2932]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_chain pid=2932 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:18:36.781000 audit[2932]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc2bb6b00 a2=0 a3=1 items=0 ppid=2782 pid=2932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 
00:18:36.781000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Nov 1 00:18:36.784000 audit[2934]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=2934 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:18:36.784000 audit[2934]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffdf798500 a2=0 a3=1 items=0 ppid=2782 pid=2934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.784000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Nov 1 00:18:36.785000 audit[2935]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2935 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:18:36.785000 audit[2935]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffddd44fe0 a2=0 a3=1 items=0 ppid=2782 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.785000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Nov 1 00:18:36.787000 audit[2937]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2937 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:18:36.787000 audit[2937]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffdc97e890 a2=0 a3=1 items=0 ppid=2782 pid=2937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.787000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Nov 1 00:18:36.790000 audit[2940]: NETFILTER_CFG table=filter:75 family=10 entries=2 op=nft_register_chain pid=2940 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:18:36.790000 audit[2940]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffdf17e0f0 a2=0 a3=1 items=0 ppid=2782 pid=2940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.790000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Nov 1 00:18:36.791000 audit[2941]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_chain pid=2941 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:18:36.791000 audit[2941]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff5093bf0 a2=0 a3=1 items=0 ppid=2782 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.791000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Nov 1 00:18:36.793000 audit[2943]: NETFILTER_CFG table=filter:77 family=10 entries=1 
op=nft_register_rule pid=2943 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:18:36.793000 audit[2943]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffebde7ac0 a2=0 a3=1 items=0 ppid=2782 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.793000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Nov 1 00:18:36.794000 audit[2944]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_chain pid=2944 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:18:36.794000 audit[2944]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffeee3900 a2=0 a3=1 items=0 ppid=2782 pid=2944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.794000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Nov 1 00:18:36.796000 audit[2946]: NETFILTER_CFG table=filter:79 family=10 entries=1 op=nft_register_rule pid=2946 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:18:36.796000 audit[2946]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdec39ae0 a2=0 a3=1 items=0 ppid=2782 pid=2946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.796000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Nov 1 00:18:36.799000 audit[2949]: NETFILTER_CFG table=filter:80 family=10 entries=1 op=nft_register_rule pid=2949 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:18:36.799000 audit[2949]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffe4cc3a0 a2=0 a3=1 items=0 ppid=2782 pid=2949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.799000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Nov 1 00:18:36.802000 audit[2952]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_rule pid=2952 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:18:36.802000 audit[2952]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff9d4fc00 a2=0 a3=1 items=0 ppid=2782 pid=2952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.802000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Nov 1 00:18:36.803000 audit[2953]: NETFILTER_CFG table=nat:82 family=10 entries=1 
op=nft_register_chain pid=2953 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:18:36.803000 audit[2953]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffee7b2370 a2=0 a3=1 items=0 ppid=2782 pid=2953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.803000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Nov 1 00:18:36.805000 audit[2955]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2955 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:18:36.805000 audit[2955]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffe9c9c230 a2=0 a3=1 items=0 ppid=2782 pid=2955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.805000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 00:18:36.808000 audit[2958]: NETFILTER_CFG table=nat:84 family=10 entries=2 op=nft_register_chain pid=2958 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:18:36.808000 audit[2958]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffd318e330 a2=0 a3=1 items=0 ppid=2782 pid=2958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.808000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 00:18:36.809000 audit[2959]: NETFILTER_CFG table=nat:85 family=10 entries=1 op=nft_register_chain pid=2959 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:18:36.809000 audit[2959]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffddda9270 a2=0 a3=1 items=0 ppid=2782 pid=2959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.809000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Nov 1 00:18:36.811000 audit[2961]: NETFILTER_CFG table=nat:86 family=10 entries=2 op=nft_register_chain pid=2961 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:18:36.811000 audit[2961]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffd6c97b80 a2=0 a3=1 items=0 ppid=2782 pid=2961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.811000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Nov 1 00:18:36.813000 audit[2962]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=2962 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:18:36.813000 audit[2962]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcb1cfc50 a2=0 a3=1 items=0 ppid=2782 pid=2962 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.813000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Nov 1 00:18:36.815000 audit[2964]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=2964 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:18:36.815000 audit[2964]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffda4c2ea0 a2=0 a3=1 items=0 ppid=2782 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.815000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 00:18:36.818000 audit[2967]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_rule pid=2967 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:18:36.818000 audit[2967]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffffdef2b50 a2=0 a3=1 items=0 ppid=2782 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.818000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 00:18:36.820000 audit[2969]: NETFILTER_CFG table=filter:90 family=10 entries=3 op=nft_register_rule pid=2969 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Nov 1 00:18:36.820000 audit[2969]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=fffff22063c0 a2=0 
a3=1 items=0 ppid=2782 pid=2969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.820000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:18:36.821000 audit[2969]: NETFILTER_CFG table=nat:91 family=10 entries=7 op=nft_register_chain pid=2969 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Nov 1 00:18:36.821000 audit[2969]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=fffff22063c0 a2=0 a3=1 items=0 ppid=2782 pid=2969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:36.821000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:18:38.248795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2821160811.mount: Deactivated successfully. 
Nov 1 00:18:38.837493 kubelet[2677]: I1101 00:18:38.837339 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l9b4s" podStartSLOduration=3.837319666 podStartE2EDuration="3.837319666s" podCreationTimestamp="2025-11-01 00:18:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:18:36.562934848 +0000 UTC m=+6.373251922" watchObservedRunningTime="2025-11-01 00:18:38.837319666 +0000 UTC m=+8.647636740" Nov 1 00:18:38.903789 env[1586]: time="2025-11-01T00:18:38.903748004Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:38.909927 env[1586]: time="2025-11-01T00:18:38.909891154Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:38.913917 env[1586]: time="2025-11-01T00:18:38.913877188Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:38.917690 env[1586]: time="2025-11-01T00:18:38.917662782Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:38.918219 env[1586]: time="2025-11-01T00:18:38.918192781Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Nov 1 00:18:38.922303 env[1586]: time="2025-11-01T00:18:38.922259695Z" level=info msg="CreateContainer within sandbox 
\"5ceca0d06b7a19e437f2c7cb29e705f44dbe06a79ea3342e6e4bcea387cfdee6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 1 00:18:38.948890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1121781515.mount: Deactivated successfully. Nov 1 00:18:38.963952 env[1586]: time="2025-11-01T00:18:38.963891591Z" level=info msg="CreateContainer within sandbox \"5ceca0d06b7a19e437f2c7cb29e705f44dbe06a79ea3342e6e4bcea387cfdee6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"076fa378d5e78373c7c1d4b10b8351334b5885b7d804ef26bc276e4f93511dcf\"" Nov 1 00:18:38.965533 env[1586]: time="2025-11-01T00:18:38.965508988Z" level=info msg="StartContainer for \"076fa378d5e78373c7c1d4b10b8351334b5885b7d804ef26bc276e4f93511dcf\"" Nov 1 00:18:39.013593 env[1586]: time="2025-11-01T00:18:39.013553274Z" level=info msg="StartContainer for \"076fa378d5e78373c7c1d4b10b8351334b5885b7d804ef26bc276e4f93511dcf\" returns successfully" Nov 1 00:18:39.204730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2800105356.mount: Deactivated successfully. 
Nov 1 00:18:40.941813 kubelet[2677]: I1101 00:18:40.941757 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-vhkcl" podStartSLOduration=2.675904832 podStartE2EDuration="4.94174079s" podCreationTimestamp="2025-11-01 00:18:36 +0000 UTC" firstStartedPulling="2025-11-01 00:18:36.653891021 +0000 UTC m=+6.464208095" lastFinishedPulling="2025-11-01 00:18:38.919727019 +0000 UTC m=+8.730044053" observedRunningTime="2025-11-01 00:18:39.526490819 +0000 UTC m=+9.336807893" watchObservedRunningTime="2025-11-01 00:18:40.94174079 +0000 UTC m=+10.752057864" Nov 1 00:18:45.068350 sudo[2005]: pam_unix(sudo:session): session closed for user root Nov 1 00:18:45.067000 audit[2005]: USER_END pid=2005 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:18:45.073845 kernel: kauditd_printk_skb: 143 callbacks suppressed Nov 1 00:18:45.073951 kernel: audit: type=1106 audit(1761956325.067:295): pid=2005 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:18:45.096000 audit[2005]: CRED_DISP pid=2005 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:18:45.141561 kernel: audit: type=1104 audit(1761956325.096:296): pid=2005 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Nov 1 00:18:45.176653 sshd[2001]: pam_unix(sshd:session): session closed for user core Nov 1 00:18:45.176000 audit[2001]: USER_END pid=2001 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:18:45.189615 systemd[1]: sshd@6-10.200.20.42:22-10.200.16.10:54380.service: Deactivated successfully. Nov 1 00:18:45.191150 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 00:18:45.207791 systemd-logind[1567]: Session 9 logged out. Waiting for processes to exit. Nov 1 00:18:45.208619 systemd-logind[1567]: Removed session 9. Nov 1 00:18:45.176000 audit[2001]: CRED_DISP pid=2001 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:18:45.235958 kernel: audit: type=1106 audit(1761956325.176:297): pid=2001 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:18:45.236083 kernel: audit: type=1104 audit(1761956325.176:298): pid=2001 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:18:45.236114 kernel: audit: type=1131 audit(1761956325.188:299): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.42:22-10.200.16.10:54380 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:18:45.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.42:22-10.200.16.10:54380 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:18:49.801000 audit[3052]: NETFILTER_CFG table=filter:92 family=2 entries=14 op=nft_register_rule pid=3052 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:18:49.801000 audit[3052]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc40db770 a2=0 a3=1 items=0 ppid=2782 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:49.855566 kernel: audit: type=1325 audit(1761956329.801:300): table=filter:92 family=2 entries=14 op=nft_register_rule pid=3052 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:18:49.855721 kernel: audit: type=1300 audit(1761956329.801:300): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc40db770 a2=0 a3=1 items=0 ppid=2782 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:49.801000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:18:49.912127 kernel: audit: type=1327 audit(1761956329.801:300): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:18:49.911000 audit[3052]: NETFILTER_CFG table=nat:93 family=2 entries=12 op=nft_register_rule pid=3052 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:18:49.927222 kernel: audit: type=1325 audit(1761956329.911:301): table=nat:93 family=2 entries=12 
op=nft_register_rule pid=3052 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:18:49.927364 kernel: audit: type=1300 audit(1761956329.911:301): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc40db770 a2=0 a3=1 items=0 ppid=2782 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:49.911000 audit[3052]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc40db770 a2=0 a3=1 items=0 ppid=2782 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:49.911000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:18:49.969000 audit[3054]: NETFILTER_CFG table=filter:94 family=2 entries=15 op=nft_register_rule pid=3054 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:18:49.969000 audit[3054]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffed4f8350 a2=0 a3=1 items=0 ppid=2782 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:49.969000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:18:49.975000 audit[3054]: NETFILTER_CFG table=nat:95 family=2 entries=12 op=nft_register_rule pid=3054 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:18:49.975000 audit[3054]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffed4f8350 a2=0 a3=1 items=0 ppid=2782 pid=3054 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:49.975000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:18:54.528000 audit[3056]: NETFILTER_CFG table=filter:96 family=2 entries=17 op=nft_register_rule pid=3056 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:18:54.534869 kernel: kauditd_printk_skb: 7 callbacks suppressed Nov 1 00:18:54.534978 kernel: audit: type=1325 audit(1761956334.528:304): table=filter:96 family=2 entries=17 op=nft_register_rule pid=3056 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:18:54.528000 audit[3056]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe5e2c520 a2=0 a3=1 items=0 ppid=2782 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:54.586875 kernel: audit: type=1300 audit(1761956334.528:304): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe5e2c520 a2=0 a3=1 items=0 ppid=2782 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:54.528000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:18:54.601827 kernel: audit: type=1327 audit(1761956334.528:304): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:18:54.557000 audit[3056]: NETFILTER_CFG table=nat:97 family=2 entries=12 op=nft_register_rule pid=3056 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Nov 1 00:18:54.616987 kernel: audit: type=1325 audit(1761956334.557:305): table=nat:97 family=2 entries=12 op=nft_register_rule pid=3056 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:18:54.557000 audit[3056]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe5e2c520 a2=0 a3=1 items=0 ppid=2782 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:54.646662 kernel: audit: type=1300 audit(1761956334.557:305): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe5e2c520 a2=0 a3=1 items=0 ppid=2782 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:54.557000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:18:54.662726 kernel: audit: type=1327 audit(1761956334.557:305): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:18:54.686000 audit[3058]: NETFILTER_CFG table=filter:98 family=2 entries=18 op=nft_register_rule pid=3058 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:18:54.686000 audit[3058]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff034a890 a2=0 a3=1 items=0 ppid=2782 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:54.732877 kernel: audit: type=1325 audit(1761956334.686:306): table=filter:98 family=2 entries=18 op=nft_register_rule pid=3058 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Nov 1 00:18:54.733023 kernel: audit: type=1300 audit(1761956334.686:306): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff034a890 a2=0 a3=1 items=0 ppid=2782 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:54.686000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:18:54.748457 kernel: audit: type=1327 audit(1761956334.686:306): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:18:54.747000 audit[3058]: NETFILTER_CFG table=nat:99 family=2 entries=12 op=nft_register_rule pid=3058 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:18:54.747000 audit[3058]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff034a890 a2=0 a3=1 items=0 ppid=2782 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:54.747000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:18:54.766307 kernel: audit: type=1325 audit(1761956334.747:307): table=nat:99 family=2 entries=12 op=nft_register_rule pid=3058 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:18:55.778000 audit[3060]: NETFILTER_CFG table=filter:100 family=2 entries=19 op=nft_register_rule pid=3060 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:18:55.778000 audit[3060]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffee51d8e0 a2=0 a3=1 items=0 ppid=2782 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:55.778000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:18:55.783000 audit[3060]: NETFILTER_CFG table=nat:101 family=2 entries=12 op=nft_register_rule pid=3060 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:18:55.783000 audit[3060]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffee51d8e0 a2=0 a3=1 items=0 ppid=2782 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:55.783000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:18:57.386000 audit[3062]: NETFILTER_CFG table=filter:102 family=2 entries=21 op=nft_register_rule pid=3062 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:18:57.386000 audit[3062]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe3f196a0 a2=0 a3=1 items=0 ppid=2782 pid=3062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:57.386000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:18:57.393000 audit[3062]: NETFILTER_CFG table=nat:103 family=2 entries=12 op=nft_register_rule pid=3062 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:18:57.393000 audit[3062]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe3f196a0 a2=0 a3=1 items=0 ppid=2782 pid=3062 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:57.393000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:18:57.506059 kubelet[2677]: I1101 00:18:57.506018 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4c7a8e0e-d2ea-4e9e-a274-1dbae1548147-typha-certs\") pod \"calico-typha-6dbd98746d-jt8xz\" (UID: \"4c7a8e0e-d2ea-4e9e-a274-1dbae1548147\") " pod="calico-system/calico-typha-6dbd98746d-jt8xz" Nov 1 00:18:57.506059 kubelet[2677]: I1101 00:18:57.506060 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49wwt\" (UniqueName: \"kubernetes.io/projected/4c7a8e0e-d2ea-4e9e-a274-1dbae1548147-kube-api-access-49wwt\") pod \"calico-typha-6dbd98746d-jt8xz\" (UID: \"4c7a8e0e-d2ea-4e9e-a274-1dbae1548147\") " pod="calico-system/calico-typha-6dbd98746d-jt8xz" Nov 1 00:18:57.506510 kubelet[2677]: I1101 00:18:57.506081 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c7a8e0e-d2ea-4e9e-a274-1dbae1548147-tigera-ca-bundle\") pod \"calico-typha-6dbd98746d-jt8xz\" (UID: \"4c7a8e0e-d2ea-4e9e-a274-1dbae1548147\") " pod="calico-system/calico-typha-6dbd98746d-jt8xz" Nov 1 00:18:57.706808 kubelet[2677]: I1101 00:18:57.706757 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d58805c0-1b3c-4dc3-ad04-c8ca427411f4-var-run-calico\") pod \"calico-node-zth44\" (UID: \"d58805c0-1b3c-4dc3-ad04-c8ca427411f4\") " pod="calico-system/calico-node-zth44" Nov 1 00:18:57.706808 kubelet[2677]: I1101 
00:18:57.706807 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d58805c0-1b3c-4dc3-ad04-c8ca427411f4-cni-bin-dir\") pod \"calico-node-zth44\" (UID: \"d58805c0-1b3c-4dc3-ad04-c8ca427411f4\") " pod="calico-system/calico-node-zth44" Nov 1 00:18:57.706965 kubelet[2677]: I1101 00:18:57.706824 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d58805c0-1b3c-4dc3-ad04-c8ca427411f4-xtables-lock\") pod \"calico-node-zth44\" (UID: \"d58805c0-1b3c-4dc3-ad04-c8ca427411f4\") " pod="calico-system/calico-node-zth44" Nov 1 00:18:57.706965 kubelet[2677]: I1101 00:18:57.706840 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d58805c0-1b3c-4dc3-ad04-c8ca427411f4-policysync\") pod \"calico-node-zth44\" (UID: \"d58805c0-1b3c-4dc3-ad04-c8ca427411f4\") " pod="calico-system/calico-node-zth44" Nov 1 00:18:57.706965 kubelet[2677]: I1101 00:18:57.706866 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d58805c0-1b3c-4dc3-ad04-c8ca427411f4-tigera-ca-bundle\") pod \"calico-node-zth44\" (UID: \"d58805c0-1b3c-4dc3-ad04-c8ca427411f4\") " pod="calico-system/calico-node-zth44" Nov 1 00:18:57.706965 kubelet[2677]: I1101 00:18:57.706880 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d58805c0-1b3c-4dc3-ad04-c8ca427411f4-var-lib-calico\") pod \"calico-node-zth44\" (UID: \"d58805c0-1b3c-4dc3-ad04-c8ca427411f4\") " pod="calico-system/calico-node-zth44" Nov 1 00:18:57.706965 kubelet[2677]: I1101 00:18:57.706894 2677 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrvtg\" (UniqueName: \"kubernetes.io/projected/d58805c0-1b3c-4dc3-ad04-c8ca427411f4-kube-api-access-wrvtg\") pod \"calico-node-zth44\" (UID: \"d58805c0-1b3c-4dc3-ad04-c8ca427411f4\") " pod="calico-system/calico-node-zth44" Nov 1 00:18:57.707084 kubelet[2677]: I1101 00:18:57.706914 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d58805c0-1b3c-4dc3-ad04-c8ca427411f4-node-certs\") pod \"calico-node-zth44\" (UID: \"d58805c0-1b3c-4dc3-ad04-c8ca427411f4\") " pod="calico-system/calico-node-zth44" Nov 1 00:18:57.707084 kubelet[2677]: I1101 00:18:57.706941 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d58805c0-1b3c-4dc3-ad04-c8ca427411f4-cni-net-dir\") pod \"calico-node-zth44\" (UID: \"d58805c0-1b3c-4dc3-ad04-c8ca427411f4\") " pod="calico-system/calico-node-zth44" Nov 1 00:18:57.707084 kubelet[2677]: I1101 00:18:57.706957 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d58805c0-1b3c-4dc3-ad04-c8ca427411f4-cni-log-dir\") pod \"calico-node-zth44\" (UID: \"d58805c0-1b3c-4dc3-ad04-c8ca427411f4\") " pod="calico-system/calico-node-zth44" Nov 1 00:18:57.707084 kubelet[2677]: I1101 00:18:57.706973 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d58805c0-1b3c-4dc3-ad04-c8ca427411f4-lib-modules\") pod \"calico-node-zth44\" (UID: \"d58805c0-1b3c-4dc3-ad04-c8ca427411f4\") " pod="calico-system/calico-node-zth44" Nov 1 00:18:57.707084 kubelet[2677]: I1101 00:18:57.706989 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d58805c0-1b3c-4dc3-ad04-c8ca427411f4-flexvol-driver-host\") pod \"calico-node-zth44\" (UID: \"d58805c0-1b3c-4dc3-ad04-c8ca427411f4\") " pod="calico-system/calico-node-zth44" Nov 1 00:18:57.735966 env[1586]: time="2025-11-01T00:18:57.735412042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6dbd98746d-jt8xz,Uid:4c7a8e0e-d2ea-4e9e-a274-1dbae1548147,Namespace:calico-system,Attempt:0,}" Nov 1 00:18:57.770222 env[1586]: time="2025-11-01T00:18:57.770054685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:18:57.770222 env[1586]: time="2025-11-01T00:18:57.770090205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:18:57.770222 env[1586]: time="2025-11-01T00:18:57.770100205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:57.770452 env[1586]: time="2025-11-01T00:18:57.770296485Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f1153b987ffa7ac78ffa28cd523fe05fee7992037b83926d4bc4a29f095755d7 pid=3073 runtime=io.containerd.runc.v2 Nov 1 00:18:57.808789 kubelet[2677]: E1101 00:18:57.808757 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.808948 kubelet[2677]: W1101 00:18:57.808933 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.809122 kubelet[2677]: E1101 00:18:57.809109 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:57.809528 kubelet[2677]: E1101 00:18:57.809515 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.809636 kubelet[2677]: W1101 00:18:57.809623 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.809728 kubelet[2677]: E1101 00:18:57.809716 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:57.812684 kubelet[2677]: E1101 00:18:57.812652 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.812684 kubelet[2677]: W1101 00:18:57.812672 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.812781 kubelet[2677]: E1101 00:18:57.812690 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:57.812857 kubelet[2677]: E1101 00:18:57.812834 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.812857 kubelet[2677]: W1101 00:18:57.812849 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.812931 kubelet[2677]: E1101 00:18:57.812861 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:57.812990 kubelet[2677]: E1101 00:18:57.812973 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.812990 kubelet[2677]: W1101 00:18:57.812985 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.813059 kubelet[2677]: E1101 00:18:57.812996 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:57.813178 kubelet[2677]: E1101 00:18:57.813153 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.813178 kubelet[2677]: W1101 00:18:57.813166 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.813178 kubelet[2677]: E1101 00:18:57.813175 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:57.813347 kubelet[2677]: E1101 00:18:57.813325 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.813347 kubelet[2677]: W1101 00:18:57.813339 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.813415 kubelet[2677]: E1101 00:18:57.813349 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:57.815333 kubelet[2677]: E1101 00:18:57.813466 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.815333 kubelet[2677]: W1101 00:18:57.813477 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.815333 kubelet[2677]: E1101 00:18:57.813487 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:57.815333 kubelet[2677]: E1101 00:18:57.813653 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.815333 kubelet[2677]: W1101 00:18:57.813662 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.815333 kubelet[2677]: E1101 00:18:57.813672 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:57.815333 kubelet[2677]: E1101 00:18:57.813918 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.815333 kubelet[2677]: W1101 00:18:57.813928 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.815333 kubelet[2677]: E1101 00:18:57.813939 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:57.821053 kubelet[2677]: E1101 00:18:57.821037 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.821158 kubelet[2677]: W1101 00:18:57.821144 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.821248 kubelet[2677]: E1101 00:18:57.821236 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:57.821526 kubelet[2677]: E1101 00:18:57.821515 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.821620 kubelet[2677]: W1101 00:18:57.821607 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.821765 kubelet[2677]: E1101 00:18:57.821739 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:57.821893 kubelet[2677]: E1101 00:18:57.821882 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.821968 kubelet[2677]: W1101 00:18:57.821957 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.822047 kubelet[2677]: E1101 00:18:57.822036 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:57.822302 kubelet[2677]: E1101 00:18:57.822279 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.822396 kubelet[2677]: W1101 00:18:57.822382 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.822543 kubelet[2677]: E1101 00:18:57.822530 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:57.822822 kubelet[2677]: E1101 00:18:57.822808 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.822912 kubelet[2677]: W1101 00:18:57.822899 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.822990 kubelet[2677]: E1101 00:18:57.822979 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:57.823230 kubelet[2677]: E1101 00:18:57.823210 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.823320 kubelet[2677]: W1101 00:18:57.823239 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.823320 kubelet[2677]: E1101 00:18:57.823267 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:57.824191 kubelet[2677]: E1101 00:18:57.824172 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.824297 kubelet[2677]: W1101 00:18:57.824270 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.824387 kubelet[2677]: E1101 00:18:57.824373 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:57.834484 kubelet[2677]: E1101 00:18:57.834455 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.834599 kubelet[2677]: W1101 00:18:57.834583 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.834666 kubelet[2677]: E1101 00:18:57.834651 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:57.838653 kubelet[2677]: E1101 00:18:57.838637 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.838989 kubelet[2677]: W1101 00:18:57.838972 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.839163 kubelet[2677]: E1101 00:18:57.839148 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:57.848682 kubelet[2677]: E1101 00:18:57.848651 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4mt97" podUID="8e50a05e-0803-4e20-bd2b-ccf8c9d67c23" Nov 1 00:18:57.882816 kubelet[2677]: E1101 00:18:57.882784 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.882816 kubelet[2677]: W1101 00:18:57.882808 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.882996 kubelet[2677]: E1101 00:18:57.882829 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:57.882996 kubelet[2677]: E1101 00:18:57.882955 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.882996 kubelet[2677]: W1101 00:18:57.882962 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.883065 kubelet[2677]: E1101 00:18:57.882999 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:57.883123 kubelet[2677]: E1101 00:18:57.883104 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.883123 kubelet[2677]: W1101 00:18:57.883116 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.883197 kubelet[2677]: E1101 00:18:57.883124 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:57.883250 kubelet[2677]: E1101 00:18:57.883234 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.883250 kubelet[2677]: W1101 00:18:57.883245 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.883327 kubelet[2677]: E1101 00:18:57.883252 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:57.883403 kubelet[2677]: E1101 00:18:57.883390 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.883403 kubelet[2677]: W1101 00:18:57.883401 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.883474 kubelet[2677]: E1101 00:18:57.883409 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:57.883527 kubelet[2677]: E1101 00:18:57.883511 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.883527 kubelet[2677]: W1101 00:18:57.883522 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.883597 kubelet[2677]: E1101 00:18:57.883531 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:57.883659 kubelet[2677]: E1101 00:18:57.883647 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.883659 kubelet[2677]: W1101 00:18:57.883656 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.883725 kubelet[2677]: E1101 00:18:57.883664 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:57.883779 kubelet[2677]: E1101 00:18:57.883763 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.883779 kubelet[2677]: W1101 00:18:57.883773 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.883852 kubelet[2677]: E1101 00:18:57.883780 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:57.883911 kubelet[2677]: E1101 00:18:57.883895 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.883911 kubelet[2677]: W1101 00:18:57.883906 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.883984 kubelet[2677]: E1101 00:18:57.883913 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:57.884071 kubelet[2677]: E1101 00:18:57.884018 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.884071 kubelet[2677]: W1101 00:18:57.884028 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.884071 kubelet[2677]: E1101 00:18:57.884038 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:57.884161 kubelet[2677]: E1101 00:18:57.884144 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.884161 kubelet[2677]: W1101 00:18:57.884150 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.884161 kubelet[2677]: E1101 00:18:57.884157 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:57.884276 kubelet[2677]: E1101 00:18:57.884264 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.884276 kubelet[2677]: W1101 00:18:57.884273 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.884351 kubelet[2677]: E1101 00:18:57.884280 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:57.884426 kubelet[2677]: E1101 00:18:57.884413 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.884426 kubelet[2677]: W1101 00:18:57.884422 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.884500 kubelet[2677]: E1101 00:18:57.884433 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:57.884565 kubelet[2677]: E1101 00:18:57.884535 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.884565 kubelet[2677]: W1101 00:18:57.884540 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.884565 kubelet[2677]: E1101 00:18:57.884547 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:57.886308 kubelet[2677]: E1101 00:18:57.884639 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.886308 kubelet[2677]: W1101 00:18:57.884650 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.886308 kubelet[2677]: E1101 00:18:57.884657 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:57.886308 kubelet[2677]: E1101 00:18:57.884799 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.886308 kubelet[2677]: W1101 00:18:57.884807 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.886308 kubelet[2677]: E1101 00:18:57.884814 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:57.886308 kubelet[2677]: E1101 00:18:57.884967 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.886308 kubelet[2677]: W1101 00:18:57.884974 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.886308 kubelet[2677]: E1101 00:18:57.884982 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:57.886308 kubelet[2677]: E1101 00:18:57.885083 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.886599 kubelet[2677]: W1101 00:18:57.885089 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.886599 kubelet[2677]: E1101 00:18:57.885096 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:57.886599 kubelet[2677]: E1101 00:18:57.885192 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.886599 kubelet[2677]: W1101 00:18:57.885197 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.886599 kubelet[2677]: E1101 00:18:57.885204 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:57.886599 kubelet[2677]: E1101 00:18:57.885350 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.886599 kubelet[2677]: W1101 00:18:57.885358 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.886599 kubelet[2677]: E1101 00:18:57.885366 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:57.908763 kubelet[2677]: E1101 00:18:57.908730 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.908919 kubelet[2677]: W1101 00:18:57.908905 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.909007 kubelet[2677]: E1101 00:18:57.908994 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:57.909091 kubelet[2677]: I1101 00:18:57.909078 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8e50a05e-0803-4e20-bd2b-ccf8c9d67c23-registration-dir\") pod \"csi-node-driver-4mt97\" (UID: \"8e50a05e-0803-4e20-bd2b-ccf8c9d67c23\") " pod="calico-system/csi-node-driver-4mt97" Nov 1 00:18:57.909348 kubelet[2677]: E1101 00:18:57.909336 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.909433 kubelet[2677]: W1101 00:18:57.909422 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.909519 kubelet[2677]: E1101 00:18:57.909507 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:57.909742 kubelet[2677]: E1101 00:18:57.909729 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.909822 kubelet[2677]: W1101 00:18:57.909810 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.909897 kubelet[2677]: E1101 00:18:57.909886 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:57.910019 kubelet[2677]: I1101 00:18:57.910007 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8e50a05e-0803-4e20-bd2b-ccf8c9d67c23-socket-dir\") pod \"csi-node-driver-4mt97\" (UID: \"8e50a05e-0803-4e20-bd2b-ccf8c9d67c23\") " pod="calico-system/csi-node-driver-4mt97" Nov 1 00:18:57.910205 kubelet[2677]: E1101 00:18:57.910195 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.910306 kubelet[2677]: W1101 00:18:57.910293 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.910384 kubelet[2677]: E1101 00:18:57.910372 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:57.910656 kubelet[2677]: E1101 00:18:57.910643 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.910742 kubelet[2677]: W1101 00:18:57.910730 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.910820 kubelet[2677]: E1101 00:18:57.910808 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:57.911030 kubelet[2677]: E1101 00:18:57.911016 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.911109 kubelet[2677]: W1101 00:18:57.911097 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.911182 kubelet[2677]: E1101 00:18:57.911159 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:57.911324 kubelet[2677]: I1101 00:18:57.911309 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gnml\" (UniqueName: \"kubernetes.io/projected/8e50a05e-0803-4e20-bd2b-ccf8c9d67c23-kube-api-access-4gnml\") pod \"csi-node-driver-4mt97\" (UID: \"8e50a05e-0803-4e20-bd2b-ccf8c9d67c23\") " pod="calico-system/csi-node-driver-4mt97" Nov 1 00:18:57.911502 kubelet[2677]: E1101 00:18:57.911491 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.911579 kubelet[2677]: W1101 00:18:57.911567 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.911658 kubelet[2677]: E1101 00:18:57.911647 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:57.911916 kubelet[2677]: E1101 00:18:57.911904 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.911990 kubelet[2677]: W1101 00:18:57.911979 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.912057 kubelet[2677]: E1101 00:18:57.912036 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:57.912258 kubelet[2677]: E1101 00:18:57.912246 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.912384 kubelet[2677]: W1101 00:18:57.912369 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.912462 kubelet[2677]: E1101 00:18:57.912451 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:57.912585 kubelet[2677]: I1101 00:18:57.912574 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8e50a05e-0803-4e20-bd2b-ccf8c9d67c23-kubelet-dir\") pod \"csi-node-driver-4mt97\" (UID: \"8e50a05e-0803-4e20-bd2b-ccf8c9d67c23\") " pod="calico-system/csi-node-driver-4mt97" Nov 1 00:18:57.912763 kubelet[2677]: E1101 00:18:57.912754 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.912847 kubelet[2677]: W1101 00:18:57.912835 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.912910 kubelet[2677]: E1101 00:18:57.912891 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:57.913143 kubelet[2677]: E1101 00:18:57.913132 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.913226 kubelet[2677]: W1101 00:18:57.913214 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.913301 kubelet[2677]: E1101 00:18:57.913290 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:57.921389 kubelet[2677]: E1101 00:18:57.921363 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.921495 kubelet[2677]: W1101 00:18:57.921483 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.921573 kubelet[2677]: E1101 00:18:57.921562 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:57.921716 kubelet[2677]: I1101 00:18:57.921704 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8e50a05e-0803-4e20-bd2b-ccf8c9d67c23-varrun\") pod \"csi-node-driver-4mt97\" (UID: \"8e50a05e-0803-4e20-bd2b-ccf8c9d67c23\") " pod="calico-system/csi-node-driver-4mt97" Nov 1 00:18:57.921860 kubelet[2677]: E1101 00:18:57.921833 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.921940 kubelet[2677]: W1101 00:18:57.921927 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.921999 kubelet[2677]: E1101 00:18:57.921988 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:57.922326 kubelet[2677]: E1101 00:18:57.922313 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.922431 kubelet[2677]: W1101 00:18:57.922419 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.922502 kubelet[2677]: E1101 00:18:57.922491 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:57.922730 kubelet[2677]: E1101 00:18:57.922718 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:57.922807 kubelet[2677]: W1101 00:18:57.922795 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:57.922879 kubelet[2677]: E1101 00:18:57.922867 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:57.936734 env[1586]: time="2025-11-01T00:18:57.936596669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6dbd98746d-jt8xz,Uid:4c7a8e0e-d2ea-4e9e-a274-1dbae1548147,Namespace:calico-system,Attempt:0,} returns sandbox id \"f1153b987ffa7ac78ffa28cd523fe05fee7992037b83926d4bc4a29f095755d7\"" Nov 1 00:18:57.938063 env[1586]: time="2025-11-01T00:18:57.937868148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 1 00:18:57.947273 env[1586]: time="2025-11-01T00:18:57.946930938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zth44,Uid:d58805c0-1b3c-4dc3-ad04-c8ca427411f4,Namespace:calico-system,Attempt:0,}" Nov 1 00:18:57.982096 env[1586]: time="2025-11-01T00:18:57.981964621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:18:57.982214 env[1586]: time="2025-11-01T00:18:57.982010941Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:18:57.982214 env[1586]: time="2025-11-01T00:18:57.982023501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:18:57.982214 env[1586]: time="2025-11-01T00:18:57.982126101Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a0f9d2bcee92d2e0449055fb16895fd90990587bc10c9bc476665b557e27d73e pid=3180 runtime=io.containerd.runc.v2 Nov 1 00:18:58.023626 kubelet[2677]: E1101 00:18:58.023419 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:58.023626 kubelet[2677]: W1101 00:18:58.023439 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:58.023626 kubelet[2677]: E1101 00:18:58.023458 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:58.023994 kubelet[2677]: E1101 00:18:58.023867 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:58.023994 kubelet[2677]: W1101 00:18:58.023878 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:58.023994 kubelet[2677]: E1101 00:18:58.023896 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:58.024238 kubelet[2677]: E1101 00:18:58.024165 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:58.024238 kubelet[2677]: W1101 00:18:58.024177 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:58.024238 kubelet[2677]: E1101 00:18:58.024193 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:58.024422 kubelet[2677]: E1101 00:18:58.024397 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:58.024422 kubelet[2677]: W1101 00:18:58.024414 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:58.024499 kubelet[2677]: E1101 00:18:58.024431 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:58.024689 kubelet[2677]: E1101 00:18:58.024661 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:58.024689 kubelet[2677]: W1101 00:18:58.024683 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:58.024784 kubelet[2677]: E1101 00:18:58.024698 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:58.024898 kubelet[2677]: E1101 00:18:58.024859 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:58.024898 kubelet[2677]: W1101 00:18:58.024873 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:58.024898 kubelet[2677]: E1101 00:18:58.024890 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:58.025144 kubelet[2677]: E1101 00:18:58.025120 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:58.025144 kubelet[2677]: W1101 00:18:58.025135 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:58.025231 kubelet[2677]: E1101 00:18:58.025215 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:58.025382 kubelet[2677]: E1101 00:18:58.025362 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:58.025382 kubelet[2677]: W1101 00:18:58.025374 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:58.025551 kubelet[2677]: E1101 00:18:58.025452 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:58.025676 kubelet[2677]: E1101 00:18:58.025657 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:58.025676 kubelet[2677]: W1101 00:18:58.025670 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:58.025754 kubelet[2677]: E1101 00:18:58.025747 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:58.026308 kubelet[2677]: E1101 00:18:58.025886 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:58.026308 kubelet[2677]: W1101 00:18:58.025898 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:58.026308 kubelet[2677]: E1101 00:18:58.025961 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:58.026980 kubelet[2677]: E1101 00:18:58.026957 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:58.026980 kubelet[2677]: W1101 00:18:58.026973 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:58.027085 kubelet[2677]: E1101 00:18:58.026990 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:58.028485 kubelet[2677]: E1101 00:18:58.027164 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:58.028485 kubelet[2677]: W1101 00:18:58.027176 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:58.028715 kubelet[2677]: E1101 00:18:58.028603 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:58.029378 kubelet[2677]: E1101 00:18:58.029354 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:58.029378 kubelet[2677]: W1101 00:18:58.029372 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:58.029492 kubelet[2677]: E1101 00:18:58.029447 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:58.029598 kubelet[2677]: E1101 00:18:58.029582 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:58.029598 kubelet[2677]: W1101 00:18:58.029592 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:58.029671 kubelet[2677]: E1101 00:18:58.029652 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:58.030740 kubelet[2677]: E1101 00:18:58.030717 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:58.030740 kubelet[2677]: W1101 00:18:58.030735 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:58.030881 kubelet[2677]: E1101 00:18:58.030831 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:58.030956 kubelet[2677]: E1101 00:18:58.030942 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:58.030956 kubelet[2677]: W1101 00:18:58.030953 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:58.031087 kubelet[2677]: E1101 00:18:58.031010 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:58.031958 kubelet[2677]: E1101 00:18:58.031936 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:58.031958 kubelet[2677]: W1101 00:18:58.031951 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:58.032180 kubelet[2677]: E1101 00:18:58.032074 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:58.032407 kubelet[2677]: E1101 00:18:58.032384 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:58.032407 kubelet[2677]: W1101 00:18:58.032402 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:58.032497 kubelet[2677]: E1101 00:18:58.032485 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:58.032598 kubelet[2677]: E1101 00:18:58.032584 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:58.032598 kubelet[2677]: W1101 00:18:58.032595 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:58.032664 kubelet[2677]: E1101 00:18:58.032652 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:58.032832 kubelet[2677]: E1101 00:18:58.032812 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:58.032832 kubelet[2677]: W1101 00:18:58.032830 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:58.032908 kubelet[2677]: E1101 00:18:58.032846 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:58.033075 kubelet[2677]: E1101 00:18:58.033058 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:58.033075 kubelet[2677]: W1101 00:18:58.033072 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:58.033150 kubelet[2677]: E1101 00:18:58.033092 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:58.033330 kubelet[2677]: E1101 00:18:58.033311 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:58.033330 kubelet[2677]: W1101 00:18:58.033326 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:58.033420 kubelet[2677]: E1101 00:18:58.033340 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:58.035302 kubelet[2677]: E1101 00:18:58.033568 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:58.035302 kubelet[2677]: W1101 00:18:58.033581 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:58.035302 kubelet[2677]: E1101 00:18:58.033595 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:58.035302 kubelet[2677]: E1101 00:18:58.033786 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:58.035302 kubelet[2677]: W1101 00:18:58.033795 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:58.035302 kubelet[2677]: E1101 00:18:58.033870 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:58.035302 kubelet[2677]: E1101 00:18:58.034225 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:58.035302 kubelet[2677]: W1101 00:18:58.034236 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:58.035302 kubelet[2677]: E1101 00:18:58.034246 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:18:58.053421 kubelet[2677]: E1101 00:18:58.052663 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:18:58.053421 kubelet[2677]: W1101 00:18:58.052686 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:18:58.053421 kubelet[2677]: E1101 00:18:58.052703 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:18:58.062546 env[1586]: time="2025-11-01T00:18:58.062499137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zth44,Uid:d58805c0-1b3c-4dc3-ad04-c8ca427411f4,Namespace:calico-system,Attempt:0,} returns sandbox id \"a0f9d2bcee92d2e0449055fb16895fd90990587bc10c9bc476665b557e27d73e\"" Nov 1 00:18:58.417000 audit[3241]: NETFILTER_CFG table=filter:104 family=2 entries=22 op=nft_register_rule pid=3241 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:18:58.417000 audit[3241]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd432a270 a2=0 a3=1 items=0 ppid=2782 pid=3241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:58.417000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:18:58.420000 audit[3241]: NETFILTER_CFG table=nat:105 family=2 entries=12 op=nft_register_rule pid=3241 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:18:58.420000 audit[3241]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd432a270 a2=0 a3=1 items=0 ppid=2782 pid=3241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:18:58.420000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:18:59.021576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3028465060.mount: Deactivated successfully. 
Nov 1 00:18:59.352512 kubelet[2677]: E1101 00:18:59.352380 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4mt97" podUID="8e50a05e-0803-4e20-bd2b-ccf8c9d67c23" Nov 1 00:18:59.787886 env[1586]: time="2025-11-01T00:18:59.787835640Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:59.798049 env[1586]: time="2025-11-01T00:18:59.798010070Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:59.801278 env[1586]: time="2025-11-01T00:18:59.801240627Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:59.804436 env[1586]: time="2025-11-01T00:18:59.804400143Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:18:59.804954 env[1586]: time="2025-11-01T00:18:59.804925103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Nov 1 00:18:59.814374 env[1586]: time="2025-11-01T00:18:59.814084974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 1 00:18:59.820665 env[1586]: time="2025-11-01T00:18:59.820610007Z" level=info msg="CreateContainer within sandbox 
\"f1153b987ffa7ac78ffa28cd523fe05fee7992037b83926d4bc4a29f095755d7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 1 00:18:59.854398 env[1586]: time="2025-11-01T00:18:59.854360132Z" level=info msg="CreateContainer within sandbox \"f1153b987ffa7ac78ffa28cd523fe05fee7992037b83926d4bc4a29f095755d7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b9f19a162d2eec036893744708c03fff1dcc084206cbc492b061f20caf002935\"" Nov 1 00:18:59.854958 env[1586]: time="2025-11-01T00:18:59.854929572Z" level=info msg="StartContainer for \"b9f19a162d2eec036893744708c03fff1dcc084206cbc492b061f20caf002935\"" Nov 1 00:18:59.918443 env[1586]: time="2025-11-01T00:18:59.918401347Z" level=info msg="StartContainer for \"b9f19a162d2eec036893744708c03fff1dcc084206cbc492b061f20caf002935\" returns successfully" Nov 1 00:19:00.601959 kubelet[2677]: E1101 00:19:00.601926 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:19:00.601959 kubelet[2677]: W1101 00:19:00.601950 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:19:00.602359 kubelet[2677]: E1101 00:19:00.601971 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:19:00.602359 kubelet[2677]: E1101 00:19:00.602100 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:19:00.602359 kubelet[2677]: W1101 00:19:00.602108 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:19:00.602359 kubelet[2677]: E1101 00:19:00.602116 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:19:00.602359 kubelet[2677]: E1101 00:19:00.602239 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:19:00.602359 kubelet[2677]: W1101 00:19:00.602246 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:19:00.602359 kubelet[2677]: E1101 00:19:00.602253 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:19:00.602523 kubelet[2677]: E1101 00:19:00.602399 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:19:00.602523 kubelet[2677]: W1101 00:19:00.602406 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:19:00.602523 kubelet[2677]: E1101 00:19:00.602416 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:19:00.602588 kubelet[2677]: E1101 00:19:00.602538 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:19:00.602588 kubelet[2677]: W1101 00:19:00.602545 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:19:00.602588 kubelet[2677]: E1101 00:19:00.602552 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:19:00.602674 kubelet[2677]: E1101 00:19:00.602653 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:19:00.602674 kubelet[2677]: W1101 00:19:00.602668 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:19:00.602738 kubelet[2677]: E1101 00:19:00.602676 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:19:00.602807 kubelet[2677]: E1101 00:19:00.602790 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:19:00.602807 kubelet[2677]: W1101 00:19:00.602802 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:19:00.602877 kubelet[2677]: E1101 00:19:00.602810 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:19:00.602941 kubelet[2677]: E1101 00:19:00.602927 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:19:00.602941 kubelet[2677]: W1101 00:19:00.602938 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:19:00.603000 kubelet[2677]: E1101 00:19:00.602946 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:19:00.603090 kubelet[2677]: E1101 00:19:00.603079 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:19:00.603090 kubelet[2677]: W1101 00:19:00.603089 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:19:00.603156 kubelet[2677]: E1101 00:19:00.603097 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:19:00.603219 kubelet[2677]: E1101 00:19:00.603205 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:19:00.603219 kubelet[2677]: W1101 00:19:00.603217 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:19:00.603298 kubelet[2677]: E1101 00:19:00.603225 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:19:00.603362 kubelet[2677]: E1101 00:19:00.603345 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:19:00.603362 kubelet[2677]: W1101 00:19:00.603358 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:19:00.603427 kubelet[2677]: E1101 00:19:00.603366 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:19:00.603497 kubelet[2677]: E1101 00:19:00.603484 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:19:00.603497 kubelet[2677]: W1101 00:19:00.603495 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:19:00.603559 kubelet[2677]: E1101 00:19:00.603503 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:19:00.603631 kubelet[2677]: E1101 00:19:00.603619 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:19:00.603631 kubelet[2677]: W1101 00:19:00.603629 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:19:00.603701 kubelet[2677]: E1101 00:19:00.603637 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:19:00.603760 kubelet[2677]: E1101 00:19:00.603747 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:19:00.603760 kubelet[2677]: W1101 00:19:00.603758 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:19:00.603828 kubelet[2677]: E1101 00:19:00.603765 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:19:00.603890 kubelet[2677]: E1101 00:19:00.603878 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:19:00.603890 kubelet[2677]: W1101 00:19:00.603889 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:19:00.603946 kubelet[2677]: E1101 00:19:00.603897 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:19:00.648523 kubelet[2677]: E1101 00:19:00.648500 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:19:00.648702 kubelet[2677]: W1101 00:19:00.648686 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:19:00.648784 kubelet[2677]: E1101 00:19:00.648762 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:19:00.649057 kubelet[2677]: E1101 00:19:00.649043 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:19:00.649143 kubelet[2677]: W1101 00:19:00.649131 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:19:00.649211 kubelet[2677]: E1101 00:19:00.649200 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:19:00.649473 kubelet[2677]: E1101 00:19:00.649462 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:19:00.649558 kubelet[2677]: W1101 00:19:00.649546 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:19:00.649619 kubelet[2677]: E1101 00:19:00.649609 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:19:00.649905 kubelet[2677]: E1101 00:19:00.649863 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:19:00.649982 kubelet[2677]: W1101 00:19:00.649970 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:19:00.650069 kubelet[2677]: E1101 00:19:00.650057 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:19:00.650350 kubelet[2677]: E1101 00:19:00.650336 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:19:00.650480 kubelet[2677]: W1101 00:19:00.650463 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:19:00.650569 kubelet[2677]: E1101 00:19:00.650558 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:19:00.650787 kubelet[2677]: E1101 00:19:00.650776 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:19:00.650863 kubelet[2677]: W1101 00:19:00.650851 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:19:00.650925 kubelet[2677]: E1101 00:19:00.650915 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:19:00.651191 kubelet[2677]: E1101 00:19:00.651179 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:19:00.651270 kubelet[2677]: W1101 00:19:00.651258 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:19:00.651361 kubelet[2677]: E1101 00:19:00.651346 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:19:00.651557 kubelet[2677]: E1101 00:19:00.651546 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:19:00.651629 kubelet[2677]: W1101 00:19:00.651618 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:19:00.651691 kubelet[2677]: E1101 00:19:00.651681 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:19:00.651907 kubelet[2677]: E1101 00:19:00.651896 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:19:00.651980 kubelet[2677]: W1101 00:19:00.651968 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:19:00.652042 kubelet[2677]: E1101 00:19:00.652032 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:19:00.652247 kubelet[2677]: E1101 00:19:00.652236 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:19:00.652346 kubelet[2677]: W1101 00:19:00.652333 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:19:00.652407 kubelet[2677]: E1101 00:19:00.652396 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:19:00.652631 kubelet[2677]: E1101 00:19:00.652620 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:19:00.652706 kubelet[2677]: W1101 00:19:00.652694 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:19:00.652769 kubelet[2677]: E1101 00:19:00.652756 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:19:00.652975 kubelet[2677]: E1101 00:19:00.652963 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:19:00.653048 kubelet[2677]: W1101 00:19:00.653035 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:19:00.653108 kubelet[2677]: E1101 00:19:00.653098 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:19:00.653410 kubelet[2677]: E1101 00:19:00.653396 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:19:00.653495 kubelet[2677]: W1101 00:19:00.653482 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:19:00.653556 kubelet[2677]: E1101 00:19:00.653546 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:19:00.654206 kubelet[2677]: E1101 00:19:00.654189 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:19:00.654303 kubelet[2677]: W1101 00:19:00.654278 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:19:00.654389 kubelet[2677]: E1101 00:19:00.654378 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:19:00.654778 kubelet[2677]: E1101 00:19:00.654765 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:19:00.654860 kubelet[2677]: W1101 00:19:00.654848 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:19:00.654927 kubelet[2677]: E1101 00:19:00.654916 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:19:00.655670 kubelet[2677]: E1101 00:19:00.655638 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:19:00.655670 kubelet[2677]: W1101 00:19:00.655662 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:19:00.656783 kubelet[2677]: E1101 00:19:00.655677 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:19:00.656783 kubelet[2677]: E1101 00:19:00.655918 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:19:00.656783 kubelet[2677]: W1101 00:19:00.655929 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:19:00.656783 kubelet[2677]: E1101 00:19:00.655942 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:19:00.664560 kubelet[2677]: E1101 00:19:00.663382 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:19:00.664560 kubelet[2677]: W1101 00:19:00.663425 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:19:00.664560 kubelet[2677]: E1101 00:19:00.663441 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:19:00.943933 env[1586]: time="2025-11-01T00:19:00.943807717Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:19:00.950602 env[1586]: time="2025-11-01T00:19:00.950561711Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:19:00.954597 env[1586]: time="2025-11-01T00:19:00.954572787Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:19:00.958866 env[1586]: time="2025-11-01T00:19:00.958841662Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:19:00.959678 env[1586]: time="2025-11-01T00:19:00.959145062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Nov 1 00:19:00.963206 env[1586]: time="2025-11-01T00:19:00.963167938Z" level=info msg="CreateContainer within sandbox \"a0f9d2bcee92d2e0449055fb16895fd90990587bc10c9bc476665b557e27d73e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 1 00:19:01.000995 env[1586]: time="2025-11-01T00:19:01.000891580Z" level=info msg="CreateContainer within sandbox \"a0f9d2bcee92d2e0449055fb16895fd90990587bc10c9bc476665b557e27d73e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"63d8ff3ae7f87f18adff032d526f43dcf97dbd7d6858586407112588d30d8c0e\"" Nov 1 
00:19:01.001796 env[1586]: time="2025-11-01T00:19:01.001753699Z" level=info msg="StartContainer for \"63d8ff3ae7f87f18adff032d526f43dcf97dbd7d6858586407112588d30d8c0e\"" Nov 1 00:19:01.025478 systemd[1]: run-containerd-runc-k8s.io-63d8ff3ae7f87f18adff032d526f43dcf97dbd7d6858586407112588d30d8c0e-runc.Mt7MZD.mount: Deactivated successfully. Nov 1 00:19:01.064160 env[1586]: time="2025-11-01T00:19:01.064096238Z" level=info msg="StartContainer for \"63d8ff3ae7f87f18adff032d526f43dcf97dbd7d6858586407112588d30d8c0e\" returns successfully" Nov 1 00:19:01.641129 kubelet[2677]: E1101 00:19:01.352685 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4mt97" podUID="8e50a05e-0803-4e20-bd2b-ccf8c9d67c23" Nov 1 00:19:01.641129 kubelet[2677]: I1101 00:19:01.537025 2677 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:19:01.641129 kubelet[2677]: I1101 00:19:01.553958 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6dbd98746d-jt8xz" podStartSLOduration=2.685439481 podStartE2EDuration="4.553941515s" podCreationTimestamp="2025-11-01 00:18:57 +0000 UTC" firstStartedPulling="2025-11-01 00:18:57.937649068 +0000 UTC m=+27.747966142" lastFinishedPulling="2025-11-01 00:18:59.806151102 +0000 UTC m=+29.616468176" observedRunningTime="2025-11-01 00:19:00.551439911 +0000 UTC m=+30.361756985" watchObservedRunningTime="2025-11-01 00:19:01.553941515 +0000 UTC m=+31.364258589" Nov 1 00:19:01.702081 env[1586]: time="2025-11-01T00:19:01.702030609Z" level=info msg="shim disconnected" id=63d8ff3ae7f87f18adff032d526f43dcf97dbd7d6858586407112588d30d8c0e Nov 1 00:19:01.702081 env[1586]: time="2025-11-01T00:19:01.702074929Z" level=warning msg="cleaning up after shim disconnected" 
id=63d8ff3ae7f87f18adff032d526f43dcf97dbd7d6858586407112588d30d8c0e namespace=k8s.io Nov 1 00:19:01.702081 env[1586]: time="2025-11-01T00:19:01.702084369Z" level=info msg="cleaning up dead shim" Nov 1 00:19:01.708959 env[1586]: time="2025-11-01T00:19:01.708914322Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:19:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3376 runtime=io.containerd.runc.v2\n" Nov 1 00:19:01.985224 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63d8ff3ae7f87f18adff032d526f43dcf97dbd7d6858586407112588d30d8c0e-rootfs.mount: Deactivated successfully. Nov 1 00:19:02.549783 env[1586]: time="2025-11-01T00:19:02.549521182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 1 00:19:03.352409 kubelet[2677]: E1101 00:19:03.352356 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4mt97" podUID="8e50a05e-0803-4e20-bd2b-ccf8c9d67c23" Nov 1 00:19:05.352038 kubelet[2677]: E1101 00:19:05.351984 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4mt97" podUID="8e50a05e-0803-4e20-bd2b-ccf8c9d67c23" Nov 1 00:19:05.388550 env[1586]: time="2025-11-01T00:19:05.388507654Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:19:05.397088 env[1586]: time="2025-11-01T00:19:05.394690448Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:19:05.398447 env[1586]: time="2025-11-01T00:19:05.398415644Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:19:05.405272 env[1586]: time="2025-11-01T00:19:05.402646521Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:19:05.405272 env[1586]: time="2025-11-01T00:19:05.403354960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Nov 1 00:19:05.407982 env[1586]: time="2025-11-01T00:19:05.407937556Z" level=info msg="CreateContainer within sandbox \"a0f9d2bcee92d2e0449055fb16895fd90990587bc10c9bc476665b557e27d73e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 1 00:19:05.432430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount385835573.mount: Deactivated successfully. 
Nov 1 00:19:05.449996 env[1586]: time="2025-11-01T00:19:05.449937277Z" level=info msg="CreateContainer within sandbox \"a0f9d2bcee92d2e0449055fb16895fd90990587bc10c9bc476665b557e27d73e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ea8b7664e337134b4fa9dc5b737a5717ee5227f449352a2d1e59c26960eb0156\"" Nov 1 00:19:05.450717 env[1586]: time="2025-11-01T00:19:05.450690796Z" level=info msg="StartContainer for \"ea8b7664e337134b4fa9dc5b737a5717ee5227f449352a2d1e59c26960eb0156\"" Nov 1 00:19:05.517053 env[1586]: time="2025-11-01T00:19:05.517002455Z" level=info msg="StartContainer for \"ea8b7664e337134b4fa9dc5b737a5717ee5227f449352a2d1e59c26960eb0156\" returns successfully" Nov 1 00:19:06.428313 systemd[1]: run-containerd-runc-k8s.io-ea8b7664e337134b4fa9dc5b737a5717ee5227f449352a2d1e59c26960eb0156-runc.VfrNzU.mount: Deactivated successfully. Nov 1 00:19:06.750250 env[1586]: time="2025-11-01T00:19:06.750193806Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:19:06.756655 kubelet[2677]: I1101 00:19:06.756619 2677 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 1 00:19:06.774208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea8b7664e337134b4fa9dc5b737a5717ee5227f449352a2d1e59c26960eb0156-rootfs.mount: Deactivated successfully. 
Nov 1 00:19:06.807785 kubelet[2677]: W1101 00:19:06.807719 2677 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510.3.8-n-c51a7922c9" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.8-n-c51a7922c9' and this object Nov 1 00:19:06.807785 kubelet[2677]: E1101 00:19:06.807781 2677 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-3510.3.8-n-c51a7922c9\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510.3.8-n-c51a7922c9' and this object" logger="UnhandledError" Nov 1 00:19:06.895760 kubelet[2677]: I1101 00:19:06.895725 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/baca364e-ee4d-4d71-abe5-6d4d260656e2-whisker-backend-key-pair\") pod \"whisker-69dc8bd568-8tbvd\" (UID: \"baca364e-ee4d-4d71-abe5-6d4d260656e2\") " pod="calico-system/whisker-69dc8bd568-8tbvd" Nov 1 00:19:07.596920 kubelet[2677]: I1101 00:19:06.895883 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzr2c\" (UniqueName: \"kubernetes.io/projected/57cd90f3-35a2-40bb-93fb-693c3ffcd73d-kube-api-access-gzr2c\") pod \"calico-kube-controllers-86c5674785-bs7n8\" (UID: \"57cd90f3-35a2-40bb-93fb-693c3ffcd73d\") " pod="calico-system/calico-kube-controllers-86c5674785-bs7n8" Nov 1 00:19:07.596920 kubelet[2677]: I1101 00:19:06.895908 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a7634ab8-ff62-48dd-9eee-61be2b01d0bb-config-volume\") pod 
\"coredns-668d6bf9bc-87vvp\" (UID: \"a7634ab8-ff62-48dd-9eee-61be2b01d0bb\") " pod="kube-system/coredns-668d6bf9bc-87vvp" Nov 1 00:19:07.596920 kubelet[2677]: I1101 00:19:06.895925 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e69bd0a-b324-4064-9086-3d6aa0d23b51-goldmane-ca-bundle\") pod \"goldmane-666569f655-pw8c5\" (UID: \"1e69bd0a-b324-4064-9086-3d6aa0d23b51\") " pod="calico-system/goldmane-666569f655-pw8c5" Nov 1 00:19:07.596920 kubelet[2677]: I1101 00:19:06.895941 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7dtq\" (UniqueName: \"kubernetes.io/projected/1e69bd0a-b324-4064-9086-3d6aa0d23b51-kube-api-access-x7dtq\") pod \"goldmane-666569f655-pw8c5\" (UID: \"1e69bd0a-b324-4064-9086-3d6aa0d23b51\") " pod="calico-system/goldmane-666569f655-pw8c5" Nov 1 00:19:07.596920 kubelet[2677]: I1101 00:19:06.895956 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1e69bd0a-b324-4064-9086-3d6aa0d23b51-goldmane-key-pair\") pod \"goldmane-666569f655-pw8c5\" (UID: \"1e69bd0a-b324-4064-9086-3d6aa0d23b51\") " pod="calico-system/goldmane-666569f655-pw8c5" Nov 1 00:19:07.597176 kubelet[2677]: I1101 00:19:06.895975 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/baca364e-ee4d-4d71-abe5-6d4d260656e2-whisker-ca-bundle\") pod \"whisker-69dc8bd568-8tbvd\" (UID: \"baca364e-ee4d-4d71-abe5-6d4d260656e2\") " pod="calico-system/whisker-69dc8bd568-8tbvd" Nov 1 00:19:07.597176 kubelet[2677]: I1101 00:19:06.895990 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ntxb\" (UniqueName: 
\"kubernetes.io/projected/a7634ab8-ff62-48dd-9eee-61be2b01d0bb-kube-api-access-4ntxb\") pod \"coredns-668d6bf9bc-87vvp\" (UID: \"a7634ab8-ff62-48dd-9eee-61be2b01d0bb\") " pod="kube-system/coredns-668d6bf9bc-87vvp" Nov 1 00:19:07.597176 kubelet[2677]: I1101 00:19:06.896006 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e69bd0a-b324-4064-9086-3d6aa0d23b51-config\") pod \"goldmane-666569f655-pw8c5\" (UID: \"1e69bd0a-b324-4064-9086-3d6aa0d23b51\") " pod="calico-system/goldmane-666569f655-pw8c5" Nov 1 00:19:07.597176 kubelet[2677]: I1101 00:19:06.896023 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/da0e9dac-d5af-4669-8132-3ec847bb81ba-calico-apiserver-certs\") pod \"calico-apiserver-6c8dcbbd64-qwkg7\" (UID: \"da0e9dac-d5af-4669-8132-3ec847bb81ba\") " pod="calico-apiserver/calico-apiserver-6c8dcbbd64-qwkg7" Nov 1 00:19:07.597176 kubelet[2677]: I1101 00:19:06.896038 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ab7373cc-dd84-417d-8edc-59fbf979f4b4-calico-apiserver-certs\") pod \"calico-apiserver-6c8dcbbd64-p85vf\" (UID: \"ab7373cc-dd84-417d-8edc-59fbf979f4b4\") " pod="calico-apiserver/calico-apiserver-6c8dcbbd64-p85vf" Nov 1 00:19:07.597353 kubelet[2677]: I1101 00:19:06.896066 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4048da5-d286-44c8-9ec0-180e591b9eec-config-volume\") pod \"coredns-668d6bf9bc-q8677\" (UID: \"a4048da5-d286-44c8-9ec0-180e591b9eec\") " pod="kube-system/coredns-668d6bf9bc-q8677" Nov 1 00:19:07.597353 kubelet[2677]: I1101 00:19:06.896082 2677 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zzld\" (UniqueName: \"kubernetes.io/projected/baca364e-ee4d-4d71-abe5-6d4d260656e2-kube-api-access-4zzld\") pod \"whisker-69dc8bd568-8tbvd\" (UID: \"baca364e-ee4d-4d71-abe5-6d4d260656e2\") " pod="calico-system/whisker-69dc8bd568-8tbvd" Nov 1 00:19:07.597353 kubelet[2677]: I1101 00:19:06.896101 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh9cr\" (UniqueName: \"kubernetes.io/projected/ab7373cc-dd84-417d-8edc-59fbf979f4b4-kube-api-access-kh9cr\") pod \"calico-apiserver-6c8dcbbd64-p85vf\" (UID: \"ab7373cc-dd84-417d-8edc-59fbf979f4b4\") " pod="calico-apiserver/calico-apiserver-6c8dcbbd64-p85vf" Nov 1 00:19:07.597353 kubelet[2677]: I1101 00:19:06.896118 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57cd90f3-35a2-40bb-93fb-693c3ffcd73d-tigera-ca-bundle\") pod \"calico-kube-controllers-86c5674785-bs7n8\" (UID: \"57cd90f3-35a2-40bb-93fb-693c3ffcd73d\") " pod="calico-system/calico-kube-controllers-86c5674785-bs7n8" Nov 1 00:19:07.597353 kubelet[2677]: I1101 00:19:06.896133 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfsnh\" (UniqueName: \"kubernetes.io/projected/a4048da5-d286-44c8-9ec0-180e591b9eec-kube-api-access-dfsnh\") pod \"coredns-668d6bf9bc-q8677\" (UID: \"a4048da5-d286-44c8-9ec0-180e591b9eec\") " pod="kube-system/coredns-668d6bf9bc-q8677" Nov 1 00:19:07.597488 kubelet[2677]: I1101 00:19:06.896149 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfrqc\" (UniqueName: \"kubernetes.io/projected/da0e9dac-d5af-4669-8132-3ec847bb81ba-kube-api-access-bfrqc\") pod \"calico-apiserver-6c8dcbbd64-qwkg7\" (UID: \"da0e9dac-d5af-4669-8132-3ec847bb81ba\") " 
pod="calico-apiserver/calico-apiserver-6c8dcbbd64-qwkg7" Nov 1 00:19:07.624847 env[1586]: time="2025-11-01T00:19:07.624801779Z" level=info msg="shim disconnected" id=ea8b7664e337134b4fa9dc5b737a5717ee5227f449352a2d1e59c26960eb0156 Nov 1 00:19:07.624847 env[1586]: time="2025-11-01T00:19:07.624844059Z" level=warning msg="cleaning up after shim disconnected" id=ea8b7664e337134b4fa9dc5b737a5717ee5227f449352a2d1e59c26960eb0156 namespace=k8s.io Nov 1 00:19:07.625014 env[1586]: time="2025-11-01T00:19:07.624853259Z" level=info msg="cleaning up dead shim" Nov 1 00:19:07.634065 env[1586]: time="2025-11-01T00:19:07.634020971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4mt97,Uid:8e50a05e-0803-4e20-bd2b-ccf8c9d67c23,Namespace:calico-system,Attempt:0,}" Nov 1 00:19:07.639824 env[1586]: time="2025-11-01T00:19:07.639778126Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:19:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3456 runtime=io.containerd.runc.v2\n" Nov 1 00:19:07.699615 env[1586]: time="2025-11-01T00:19:07.699578952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69dc8bd568-8tbvd,Uid:baca364e-ee4d-4d71-abe5-6d4d260656e2,Namespace:calico-system,Attempt:0,}" Nov 1 00:19:07.701239 env[1586]: time="2025-11-01T00:19:07.701196391Z" level=error msg="Failed to destroy network for sandbox \"28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:07.701591 env[1586]: time="2025-11-01T00:19:07.701556751Z" level=error msg="encountered an error cleaning up failed sandbox \"28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:07.701648 env[1586]: time="2025-11-01T00:19:07.701606150Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4mt97,Uid:8e50a05e-0803-4e20-bd2b-ccf8c9d67c23,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:07.702228 kubelet[2677]: E1101 00:19:07.701784 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:07.702228 kubelet[2677]: E1101 00:19:07.701842 2677 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4mt97" Nov 1 00:19:07.702228 kubelet[2677]: E1101 00:19:07.701863 2677 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-4mt97" Nov 1 00:19:07.703576 kubelet[2677]: E1101 00:19:07.701924 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4mt97_calico-system(8e50a05e-0803-4e20-bd2b-ccf8c9d67c23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4mt97_calico-system(8e50a05e-0803-4e20-bd2b-ccf8c9d67c23)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4mt97" podUID="8e50a05e-0803-4e20-bd2b-ccf8c9d67c23" Nov 1 00:19:07.719541 env[1586]: time="2025-11-01T00:19:07.719489894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86c5674785-bs7n8,Uid:57cd90f3-35a2-40bb-93fb-693c3ffcd73d,Namespace:calico-system,Attempt:0,}" Nov 1 00:19:07.728473 env[1586]: time="2025-11-01T00:19:07.728445846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pw8c5,Uid:1e69bd0a-b324-4064-9086-3d6aa0d23b51,Namespace:calico-system,Attempt:0,}" Nov 1 00:19:07.730017 env[1586]: time="2025-11-01T00:19:07.729990445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c8dcbbd64-p85vf,Uid:ab7373cc-dd84-417d-8edc-59fbf979f4b4,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:19:07.734210 env[1586]: time="2025-11-01T00:19:07.734180281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c8dcbbd64-qwkg7,Uid:da0e9dac-d5af-4669-8132-3ec847bb81ba,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:19:07.812736 env[1586]: time="2025-11-01T00:19:07.812690571Z" level=error msg="Failed to destroy network for sandbox 
\"6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:07.813391 env[1586]: time="2025-11-01T00:19:07.813353650Z" level=error msg="encountered an error cleaning up failed sandbox \"6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:07.813513 env[1586]: time="2025-11-01T00:19:07.813488970Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69dc8bd568-8tbvd,Uid:baca364e-ee4d-4d71-abe5-6d4d260656e2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:07.814140 kubelet[2677]: E1101 00:19:07.813788 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:07.814140 kubelet[2677]: E1101 00:19:07.813837 2677 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69dc8bd568-8tbvd" Nov 1 00:19:07.814140 kubelet[2677]: E1101 00:19:07.813858 2677 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69dc8bd568-8tbvd" Nov 1 00:19:07.815853 kubelet[2677]: E1101 00:19:07.813910 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-69dc8bd568-8tbvd_calico-system(baca364e-ee4d-4d71-abe5-6d4d260656e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-69dc8bd568-8tbvd_calico-system(baca364e-ee4d-4d71-abe5-6d4d260656e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-69dc8bd568-8tbvd" podUID="baca364e-ee4d-4d71-abe5-6d4d260656e2" Nov 1 00:19:07.913322 env[1586]: time="2025-11-01T00:19:07.912396642Z" level=error msg="Failed to destroy network for sandbox \"253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:07.913807 env[1586]: time="2025-11-01T00:19:07.913772640Z" level=error msg="encountered an error cleaning up failed sandbox 
\"253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:07.913936 env[1586]: time="2025-11-01T00:19:07.913909920Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pw8c5,Uid:1e69bd0a-b324-4064-9086-3d6aa0d23b51,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:07.915122 kubelet[2677]: E1101 00:19:07.914186 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:07.915122 kubelet[2677]: E1101 00:19:07.914247 2677 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-pw8c5" Nov 1 00:19:07.915122 kubelet[2677]: E1101 00:19:07.914264 2677 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-pw8c5" Nov 1 00:19:07.915264 kubelet[2677]: E1101 00:19:07.914314 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-pw8c5_calico-system(1e69bd0a-b324-4064-9086-3d6aa0d23b51)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-pw8c5_calico-system(1e69bd0a-b324-4064-9086-3d6aa0d23b51)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-pw8c5" podUID="1e69bd0a-b324-4064-9086-3d6aa0d23b51" Nov 1 00:19:07.920615 env[1586]: time="2025-11-01T00:19:07.920579714Z" level=error msg="Failed to destroy network for sandbox \"f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:07.921011 env[1586]: time="2025-11-01T00:19:07.920981074Z" level=error msg="encountered an error cleaning up failed sandbox \"f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:07.921122 env[1586]: time="2025-11-01T00:19:07.921097034Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c8dcbbd64-qwkg7,Uid:da0e9dac-d5af-4669-8132-3ec847bb81ba,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:07.921410 kubelet[2677]: E1101 00:19:07.921378 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:07.921481 kubelet[2677]: E1101 00:19:07.921425 2677 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-qwkg7" Nov 1 00:19:07.921481 kubelet[2677]: E1101 00:19:07.921446 2677 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-qwkg7" Nov 1 00:19:07.921575 kubelet[2677]: E1101 00:19:07.921484 2677 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c8dcbbd64-qwkg7_calico-apiserver(da0e9dac-d5af-4669-8132-3ec847bb81ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c8dcbbd64-qwkg7_calico-apiserver(da0e9dac-d5af-4669-8132-3ec847bb81ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-qwkg7" podUID="da0e9dac-d5af-4669-8132-3ec847bb81ba" Nov 1 00:19:07.922863 env[1586]: time="2025-11-01T00:19:07.921797393Z" level=error msg="Failed to destroy network for sandbox \"99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:07.922863 env[1586]: time="2025-11-01T00:19:07.922126073Z" level=error msg="encountered an error cleaning up failed sandbox \"99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:07.922863 env[1586]: time="2025-11-01T00:19:07.922182513Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86c5674785-bs7n8,Uid:57cd90f3-35a2-40bb-93fb-693c3ffcd73d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:07.923084 kubelet[2677]: E1101 00:19:07.922959 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:07.923084 kubelet[2677]: E1101 00:19:07.922993 2677 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86c5674785-bs7n8" Nov 1 00:19:07.923084 kubelet[2677]: E1101 00:19:07.923013 2677 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86c5674785-bs7n8" Nov 1 00:19:07.923173 kubelet[2677]: E1101 00:19:07.923055 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86c5674785-bs7n8_calico-system(57cd90f3-35a2-40bb-93fb-693c3ffcd73d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-86c5674785-bs7n8_calico-system(57cd90f3-35a2-40bb-93fb-693c3ffcd73d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86c5674785-bs7n8" podUID="57cd90f3-35a2-40bb-93fb-693c3ffcd73d" Nov 1 00:19:07.937153 env[1586]: time="2025-11-01T00:19:07.937114820Z" level=error msg="Failed to destroy network for sandbox \"901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:07.937545 env[1586]: time="2025-11-01T00:19:07.937515499Z" level=error msg="encountered an error cleaning up failed sandbox \"901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:07.937657 env[1586]: time="2025-11-01T00:19:07.937632699Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c8dcbbd64-p85vf,Uid:ab7373cc-dd84-417d-8edc-59fbf979f4b4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:07.937918 kubelet[2677]: E1101 00:19:07.937878 2677 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:07.937982 kubelet[2677]: E1101 00:19:07.937926 2677 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-p85vf" Nov 1 00:19:07.937982 kubelet[2677]: E1101 00:19:07.937940 2677 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-p85vf" Nov 1 00:19:07.938038 kubelet[2677]: E1101 00:19:07.937992 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c8dcbbd64-p85vf_calico-apiserver(ab7373cc-dd84-417d-8edc-59fbf979f4b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c8dcbbd64-p85vf_calico-apiserver(ab7373cc-dd84-417d-8edc-59fbf979f4b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-p85vf" podUID="ab7373cc-dd84-417d-8edc-59fbf979f4b4" Nov 1 00:19:08.010236 env[1586]: time="2025-11-01T00:19:08.010199194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q8677,Uid:a4048da5-d286-44c8-9ec0-180e591b9eec,Namespace:kube-system,Attempt:0,}" Nov 1 00:19:08.010543 env[1586]: time="2025-11-01T00:19:08.010512274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-87vvp,Uid:a7634ab8-ff62-48dd-9eee-61be2b01d0bb,Namespace:kube-system,Attempt:0,}" Nov 1 00:19:08.115788 env[1586]: time="2025-11-01T00:19:08.115717781Z" level=error msg="Failed to destroy network for sandbox \"2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:08.116094 env[1586]: time="2025-11-01T00:19:08.116059581Z" level=error msg="encountered an error cleaning up failed sandbox \"2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:08.116259 env[1586]: time="2025-11-01T00:19:08.116115101Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q8677,Uid:a4048da5-d286-44c8-9ec0-180e591b9eec,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 1 00:19:08.116481 kubelet[2677]: E1101 00:19:08.116327 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:08.116481 kubelet[2677]: E1101 00:19:08.116394 2677 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-q8677" Nov 1 00:19:08.116481 kubelet[2677]: E1101 00:19:08.116412 2677 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-q8677" Nov 1 00:19:08.116669 kubelet[2677]: E1101 00:19:08.116462 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-q8677_kube-system(a4048da5-d286-44c8-9ec0-180e591b9eec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-q8677_kube-system(a4048da5-d286-44c8-9ec0-180e591b9eec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-q8677" podUID="a4048da5-d286-44c8-9ec0-180e591b9eec" Nov 1 00:19:08.130427 env[1586]: time="2025-11-01T00:19:08.130384128Z" level=error msg="Failed to destroy network for sandbox \"a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:08.130865 env[1586]: time="2025-11-01T00:19:08.130834368Z" level=error msg="encountered an error cleaning up failed sandbox \"a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:08.130993 env[1586]: time="2025-11-01T00:19:08.130964928Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-87vvp,Uid:a7634ab8-ff62-48dd-9eee-61be2b01d0bb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:08.131426 kubelet[2677]: E1101 00:19:08.131251 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Nov 1 00:19:08.131426 kubelet[2677]: E1101 00:19:08.131315 2677 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-87vvp" Nov 1 00:19:08.131426 kubelet[2677]: E1101 00:19:08.131333 2677 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-87vvp" Nov 1 00:19:08.131564 kubelet[2677]: E1101 00:19:08.131373 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-87vvp_kube-system(a7634ab8-ff62-48dd-9eee-61be2b01d0bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-87vvp_kube-system(a7634ab8-ff62-48dd-9eee-61be2b01d0bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-87vvp" podUID="a7634ab8-ff62-48dd-9eee-61be2b01d0bb" Nov 1 00:19:08.551836 kubelet[2677]: I1101 00:19:08.551810 2677 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" Nov 1 00:19:08.554501 env[1586]: time="2025-11-01T00:19:08.554468594Z" level=info msg="StopPodSandbox for \"6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a\"" Nov 1 00:19:08.554744 kubelet[2677]: I1101 00:19:08.554726 2677 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" Nov 1 00:19:08.555507 env[1586]: time="2025-11-01T00:19:08.555225873Z" level=info msg="StopPodSandbox for \"253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696\"" Nov 1 00:19:08.570422 env[1586]: time="2025-11-01T00:19:08.568777421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 1 00:19:08.573938 kubelet[2677]: I1101 00:19:08.573520 2677 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" Nov 1 00:19:08.574299 env[1586]: time="2025-11-01T00:19:08.574260297Z" level=info msg="StopPodSandbox for \"28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857\"" Nov 1 00:19:08.575312 kubelet[2677]: I1101 00:19:08.574997 2677 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" Nov 1 00:19:08.575573 env[1586]: time="2025-11-01T00:19:08.575539215Z" level=info msg="StopPodSandbox for \"2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455\"" Nov 1 00:19:08.576769 kubelet[2677]: I1101 00:19:08.576494 2677 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" Nov 1 00:19:08.576886 env[1586]: time="2025-11-01T00:19:08.576855454Z" level=info msg="StopPodSandbox for \"99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9\"" Nov 1 00:19:08.580441 kubelet[2677]: I1101 
00:19:08.579136 2677 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" Nov 1 00:19:08.582217 env[1586]: time="2025-11-01T00:19:08.581007851Z" level=info msg="StopPodSandbox for \"f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24\"" Nov 1 00:19:08.583641 kubelet[2677]: I1101 00:19:08.583621 2677 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" Nov 1 00:19:08.585596 env[1586]: time="2025-11-01T00:19:08.585564287Z" level=info msg="StopPodSandbox for \"901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15\"" Nov 1 00:19:08.586457 kubelet[2677]: I1101 00:19:08.586441 2677 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" Nov 1 00:19:08.587412 env[1586]: time="2025-11-01T00:19:08.587358525Z" level=info msg="StopPodSandbox for \"a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a\"" Nov 1 00:19:08.600886 env[1586]: time="2025-11-01T00:19:08.600842073Z" level=error msg="StopPodSandbox for \"253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696\" failed" error="failed to destroy network for sandbox \"253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:08.603617 kubelet[2677]: E1101 00:19:08.603556 2677 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" podSandboxID="253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" Nov 1 00:19:08.604220 kubelet[2677]: E1101 00:19:08.603857 2677 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696"} Nov 1 00:19:08.604220 kubelet[2677]: E1101 00:19:08.603919 2677 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1e69bd0a-b324-4064-9086-3d6aa0d23b51\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:19:08.604220 kubelet[2677]: E1101 00:19:08.603939 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1e69bd0a-b324-4064-9086-3d6aa0d23b51\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-pw8c5" podUID="1e69bd0a-b324-4064-9086-3d6aa0d23b51" Nov 1 00:19:08.643192 env[1586]: time="2025-11-01T00:19:08.643134676Z" level=error msg="StopPodSandbox for \"6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a\" failed" error="failed to destroy network for sandbox \"6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Nov 1 00:19:08.643639 kubelet[2677]: E1101 00:19:08.643470 2677 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" Nov 1 00:19:08.643639 kubelet[2677]: E1101 00:19:08.643514 2677 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a"} Nov 1 00:19:08.643639 kubelet[2677]: E1101 00:19:08.643560 2677 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"baca364e-ee4d-4d71-abe5-6d4d260656e2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:19:08.643639 kubelet[2677]: E1101 00:19:08.643588 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"baca364e-ee4d-4d71-abe5-6d4d260656e2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-69dc8bd568-8tbvd" 
podUID="baca364e-ee4d-4d71-abe5-6d4d260656e2" Nov 1 00:19:08.680493 env[1586]: time="2025-11-01T00:19:08.680427243Z" level=error msg="StopPodSandbox for \"99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9\" failed" error="failed to destroy network for sandbox \"99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:08.680936 kubelet[2677]: E1101 00:19:08.680781 2677 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" Nov 1 00:19:08.680936 kubelet[2677]: E1101 00:19:08.680839 2677 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9"} Nov 1 00:19:08.680936 kubelet[2677]: E1101 00:19:08.680871 2677 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"57cd90f3-35a2-40bb-93fb-693c3ffcd73d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:19:08.680936 kubelet[2677]: E1101 00:19:08.680902 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"57cd90f3-35a2-40bb-93fb-693c3ffcd73d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86c5674785-bs7n8" podUID="57cd90f3-35a2-40bb-93fb-693c3ffcd73d" Nov 1 00:19:08.683224 env[1586]: time="2025-11-01T00:19:08.683169320Z" level=error msg="StopPodSandbox for \"28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857\" failed" error="failed to destroy network for sandbox \"28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:08.683568 kubelet[2677]: E1101 00:19:08.683448 2677 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" Nov 1 00:19:08.683568 kubelet[2677]: E1101 00:19:08.683486 2677 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857"} Nov 1 00:19:08.683568 kubelet[2677]: E1101 00:19:08.683521 2677 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8e50a05e-0803-4e20-bd2b-ccf8c9d67c23\" with KillPodSandboxError: \"rpc error: code = Unknown desc 
= failed to destroy network for sandbox \\\"28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:19:08.683568 kubelet[2677]: E1101 00:19:08.683540 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8e50a05e-0803-4e20-bd2b-ccf8c9d67c23\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4mt97" podUID="8e50a05e-0803-4e20-bd2b-ccf8c9d67c23" Nov 1 00:19:08.690732 env[1586]: time="2025-11-01T00:19:08.690685834Z" level=error msg="StopPodSandbox for \"2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455\" failed" error="failed to destroy network for sandbox \"2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:08.691060 kubelet[2677]: E1101 00:19:08.690935 2677 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" Nov 1 00:19:08.691060 kubelet[2677]: E1101 
00:19:08.690970 2677 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455"} Nov 1 00:19:08.691060 kubelet[2677]: E1101 00:19:08.690994 2677 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a4048da5-d286-44c8-9ec0-180e591b9eec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:19:08.691060 kubelet[2677]: E1101 00:19:08.691027 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a4048da5-d286-44c8-9ec0-180e591b9eec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-q8677" podUID="a4048da5-d286-44c8-9ec0-180e591b9eec" Nov 1 00:19:08.701261 env[1586]: time="2025-11-01T00:19:08.701212065Z" level=error msg="StopPodSandbox for \"a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a\" failed" error="failed to destroy network for sandbox \"a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:08.701628 kubelet[2677]: E1101 00:19:08.701502 2677 log.go:32] "StopPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to destroy network for sandbox \"a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" Nov 1 00:19:08.701628 kubelet[2677]: E1101 00:19:08.701542 2677 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a"} Nov 1 00:19:08.701628 kubelet[2677]: E1101 00:19:08.701580 2677 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a7634ab8-ff62-48dd-9eee-61be2b01d0bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:19:08.701628 kubelet[2677]: E1101 00:19:08.701599 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a7634ab8-ff62-48dd-9eee-61be2b01d0bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-87vvp" podUID="a7634ab8-ff62-48dd-9eee-61be2b01d0bb" Nov 1 00:19:08.714631 env[1586]: time="2025-11-01T00:19:08.714566493Z" level=error msg="StopPodSandbox for 
\"f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24\" failed" error="failed to destroy network for sandbox \"f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:08.714981 kubelet[2677]: E1101 00:19:08.714862 2677 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" Nov 1 00:19:08.714981 kubelet[2677]: E1101 00:19:08.714898 2677 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24"} Nov 1 00:19:08.714981 kubelet[2677]: E1101 00:19:08.714938 2677 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da0e9dac-d5af-4669-8132-3ec847bb81ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:19:08.714981 kubelet[2677]: E1101 00:19:08.714957 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da0e9dac-d5af-4669-8132-3ec847bb81ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-qwkg7" podUID="da0e9dac-d5af-4669-8132-3ec847bb81ba" Nov 1 00:19:08.718345 env[1586]: time="2025-11-01T00:19:08.718259689Z" level=error msg="StopPodSandbox for \"901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15\" failed" error="failed to destroy network for sandbox \"901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:19:08.718647 kubelet[2677]: E1101 00:19:08.718520 2677 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" Nov 1 00:19:08.718647 kubelet[2677]: E1101 00:19:08.718564 2677 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15"} Nov 1 00:19:08.718647 kubelet[2677]: E1101 00:19:08.718589 2677 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ab7373cc-dd84-417d-8edc-59fbf979f4b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15\\\": plugin type=\\\"calico\\\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:19:08.718647 kubelet[2677]: E1101 00:19:08.718605 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ab7373cc-dd84-417d-8edc-59fbf979f4b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-p85vf" podUID="ab7373cc-dd84-417d-8edc-59fbf979f4b4" Nov 1 00:19:09.176511 kubelet[2677]: I1101 00:19:09.176431 2677 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:19:09.473000 audit[3821]: NETFILTER_CFG table=filter:106 family=2 entries=21 op=nft_register_rule pid=3821 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:19:09.480359 kernel: kauditd_printk_skb: 20 callbacks suppressed Nov 1 00:19:09.480439 kernel: audit: type=1325 audit(1761956349.473:314): table=filter:106 family=2 entries=21 op=nft_register_rule pid=3821 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:19:09.473000 audit[3821]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd1148da0 a2=0 a3=1 items=0 ppid=2782 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:09.522577 kernel: audit: type=1300 audit(1761956349.473:314): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd1148da0 a2=0 a3=1 items=0 ppid=2782 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:09.473000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:19:09.536831 kernel: audit: type=1327 audit(1761956349.473:314): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:19:09.537000 audit[3821]: NETFILTER_CFG table=nat:107 family=2 entries=19 op=nft_register_chain pid=3821 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:19:09.537000 audit[3821]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffd1148da0 a2=0 a3=1 items=0 ppid=2782 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:09.581132 kernel: audit: type=1325 audit(1761956349.537:315): table=nat:107 family=2 entries=19 op=nft_register_chain pid=3821 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:19:09.581247 kernel: audit: type=1300 audit(1761956349.537:315): arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffd1148da0 a2=0 a3=1 items=0 ppid=2782 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:09.537000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:19:09.596598 kernel: audit: type=1327 audit(1761956349.537:315): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:19:13.625364 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2028283919.mount: Deactivated successfully. Nov 1 00:19:13.663740 env[1586]: time="2025-11-01T00:19:13.663692373Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:19:13.668944 env[1586]: time="2025-11-01T00:19:13.668914728Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:19:13.672331 env[1586]: time="2025-11-01T00:19:13.672271926Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:19:13.675913 env[1586]: time="2025-11-01T00:19:13.675879283Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:19:13.676254 env[1586]: time="2025-11-01T00:19:13.676226402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 1 00:19:13.693502 env[1586]: time="2025-11-01T00:19:13.693454308Z" level=info msg="CreateContainer within sandbox \"a0f9d2bcee92d2e0449055fb16895fd90990587bc10c9bc476665b557e27d73e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 00:19:13.718254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1995439190.mount: Deactivated successfully. 
Nov 1 00:19:13.732850 env[1586]: time="2025-11-01T00:19:13.732805196Z" level=info msg="CreateContainer within sandbox \"a0f9d2bcee92d2e0449055fb16895fd90990587bc10c9bc476665b557e27d73e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"db6f76a2b7ab6ed2fb0e97e0b262e36f004b1750c5d73343a16030568d48efc1\"" Nov 1 00:19:13.734222 env[1586]: time="2025-11-01T00:19:13.733590675Z" level=info msg="StartContainer for \"db6f76a2b7ab6ed2fb0e97e0b262e36f004b1750c5d73343a16030568d48efc1\"" Nov 1 00:19:13.805930 env[1586]: time="2025-11-01T00:19:13.805886576Z" level=info msg="StartContainer for \"db6f76a2b7ab6ed2fb0e97e0b262e36f004b1750c5d73343a16030568d48efc1\" returns successfully" Nov 1 00:19:14.635872 kubelet[2677]: I1101 00:19:14.635802 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zth44" podStartSLOduration=2.020585837 podStartE2EDuration="17.634306863s" podCreationTimestamp="2025-11-01 00:18:57 +0000 UTC" firstStartedPulling="2025-11-01 00:18:58.063542016 +0000 UTC m=+27.873859090" lastFinishedPulling="2025-11-01 00:19:13.677263082 +0000 UTC m=+43.487580116" observedRunningTime="2025-11-01 00:19:14.633060624 +0000 UTC m=+44.443377698" watchObservedRunningTime="2025-11-01 00:19:14.634306863 +0000 UTC m=+44.444623937" Nov 1 00:19:14.726791 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 00:19:14.726923 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 1 00:19:14.842237 env[1586]: time="2025-11-01T00:19:14.842197694Z" level=info msg="StopPodSandbox for \"6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a\"" Nov 1 00:19:15.003824 env[1586]: 2025-11-01 00:19:14.933 [INFO][3883] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" Nov 1 00:19:15.003824 env[1586]: 2025-11-01 00:19:14.933 [INFO][3883] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" iface="eth0" netns="/var/run/netns/cni-1f06eace-b701-01da-24fc-97d8e9ef783a" Nov 1 00:19:15.003824 env[1586]: 2025-11-01 00:19:14.934 [INFO][3883] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" iface="eth0" netns="/var/run/netns/cni-1f06eace-b701-01da-24fc-97d8e9ef783a" Nov 1 00:19:15.003824 env[1586]: 2025-11-01 00:19:14.934 [INFO][3883] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" iface="eth0" netns="/var/run/netns/cni-1f06eace-b701-01da-24fc-97d8e9ef783a" Nov 1 00:19:15.003824 env[1586]: 2025-11-01 00:19:14.934 [INFO][3883] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" Nov 1 00:19:15.003824 env[1586]: 2025-11-01 00:19:14.934 [INFO][3883] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" Nov 1 00:19:15.003824 env[1586]: 2025-11-01 00:19:14.973 [INFO][3891] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" HandleID="k8s-pod-network.6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" Workload="ci--3510.3.8--n--c51a7922c9-k8s-whisker--69dc8bd568--8tbvd-eth0" Nov 1 00:19:15.003824 env[1586]: 2025-11-01 00:19:14.973 [INFO][3891] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:19:15.003824 env[1586]: 2025-11-01 00:19:14.973 [INFO][3891] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:19:15.003824 env[1586]: 2025-11-01 00:19:14.985 [WARNING][3891] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" HandleID="k8s-pod-network.6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" Workload="ci--3510.3.8--n--c51a7922c9-k8s-whisker--69dc8bd568--8tbvd-eth0" Nov 1 00:19:15.003824 env[1586]: 2025-11-01 00:19:14.985 [INFO][3891] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" HandleID="k8s-pod-network.6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" Workload="ci--3510.3.8--n--c51a7922c9-k8s-whisker--69dc8bd568--8tbvd-eth0" Nov 1 00:19:15.003824 env[1586]: 2025-11-01 00:19:14.990 [INFO][3891] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:19:15.003824 env[1586]: 2025-11-01 00:19:14.994 [INFO][3883] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" Nov 1 00:19:15.003824 env[1586]: time="2025-11-01T00:19:15.001938885Z" level=info msg="TearDown network for sandbox \"6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a\" successfully" Nov 1 00:19:15.003824 env[1586]: time="2025-11-01T00:19:15.001978605Z" level=info msg="StopPodSandbox for \"6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a\" returns successfully" Nov 1 00:19:15.000439 systemd[1]: run-netns-cni\x2d1f06eace\x2db701\x2d01da\x2d24fc\x2d97d8e9ef783a.mount: Deactivated successfully. 
Nov 1 00:19:15.154519 kubelet[2677]: I1101 00:19:15.153804 2677 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/baca364e-ee4d-4d71-abe5-6d4d260656e2-whisker-backend-key-pair\") pod \"baca364e-ee4d-4d71-abe5-6d4d260656e2\" (UID: \"baca364e-ee4d-4d71-abe5-6d4d260656e2\") " Nov 1 00:19:15.154519 kubelet[2677]: I1101 00:19:15.154105 2677 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/baca364e-ee4d-4d71-abe5-6d4d260656e2-whisker-ca-bundle\") pod \"baca364e-ee4d-4d71-abe5-6d4d260656e2\" (UID: \"baca364e-ee4d-4d71-abe5-6d4d260656e2\") " Nov 1 00:19:15.154519 kubelet[2677]: I1101 00:19:15.154150 2677 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zzld\" (UniqueName: \"kubernetes.io/projected/baca364e-ee4d-4d71-abe5-6d4d260656e2-kube-api-access-4zzld\") pod \"baca364e-ee4d-4d71-abe5-6d4d260656e2\" (UID: \"baca364e-ee4d-4d71-abe5-6d4d260656e2\") " Nov 1 00:19:15.155173 kubelet[2677]: I1101 00:19:15.155143 2677 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/baca364e-ee4d-4d71-abe5-6d4d260656e2-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "baca364e-ee4d-4d71-abe5-6d4d260656e2" (UID: "baca364e-ee4d-4d71-abe5-6d4d260656e2"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:19:15.187446 systemd[1]: var-lib-kubelet-pods-baca364e\x2dee4d\x2d4d71\x2dabe5\x2d6d4d260656e2-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 1 00:19:15.189186 kubelet[2677]: I1101 00:19:15.188994 2677 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baca364e-ee4d-4d71-abe5-6d4d260656e2-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "baca364e-ee4d-4d71-abe5-6d4d260656e2" (UID: "baca364e-ee4d-4d71-abe5-6d4d260656e2"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:19:15.189358 kubelet[2677]: I1101 00:19:15.189340 2677 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baca364e-ee4d-4d71-abe5-6d4d260656e2-kube-api-access-4zzld" (OuterVolumeSpecName: "kube-api-access-4zzld") pod "baca364e-ee4d-4d71-abe5-6d4d260656e2" (UID: "baca364e-ee4d-4d71-abe5-6d4d260656e2"). InnerVolumeSpecName "kube-api-access-4zzld". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:19:15.190189 systemd[1]: var-lib-kubelet-pods-baca364e\x2dee4d\x2d4d71\x2dabe5\x2d6d4d260656e2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4zzld.mount: Deactivated successfully. 
Nov 1 00:19:15.254571 kubelet[2677]: I1101 00:19:15.254463 2677 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/baca364e-ee4d-4d71-abe5-6d4d260656e2-whisker-ca-bundle\") on node \"ci-3510.3.8-n-c51a7922c9\" DevicePath \"\"" Nov 1 00:19:15.254720 kubelet[2677]: I1101 00:19:15.254707 2677 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4zzld\" (UniqueName: \"kubernetes.io/projected/baca364e-ee4d-4d71-abe5-6d4d260656e2-kube-api-access-4zzld\") on node \"ci-3510.3.8-n-c51a7922c9\" DevicePath \"\"" Nov 1 00:19:15.255415 kubelet[2677]: I1101 00:19:15.255342 2677 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/baca364e-ee4d-4d71-abe5-6d4d260656e2-whisker-backend-key-pair\") on node \"ci-3510.3.8-n-c51a7922c9\" DevicePath \"\"" Nov 1 00:19:15.859096 kubelet[2677]: I1101 00:19:15.859056 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqvq8\" (UniqueName: \"kubernetes.io/projected/7f2d9b8c-e77a-4876-aeb0-3b35b890f02a-kube-api-access-rqvq8\") pod \"whisker-6b964bb46-4rknd\" (UID: \"7f2d9b8c-e77a-4876-aeb0-3b35b890f02a\") " pod="calico-system/whisker-6b964bb46-4rknd" Nov 1 00:19:15.859584 kubelet[2677]: I1101 00:19:15.859566 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7f2d9b8c-e77a-4876-aeb0-3b35b890f02a-whisker-backend-key-pair\") pod \"whisker-6b964bb46-4rknd\" (UID: \"7f2d9b8c-e77a-4876-aeb0-3b35b890f02a\") " pod="calico-system/whisker-6b964bb46-4rknd" Nov 1 00:19:15.859682 kubelet[2677]: I1101 00:19:15.859668 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f2d9b8c-e77a-4876-aeb0-3b35b890f02a-whisker-ca-bundle\") 
pod \"whisker-6b964bb46-4rknd\" (UID: \"7f2d9b8c-e77a-4876-aeb0-3b35b890f02a\") " pod="calico-system/whisker-6b964bb46-4rknd" Nov 1 00:19:16.000769 env[1586]: time="2025-11-01T00:19:16.000716687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b964bb46-4rknd,Uid:7f2d9b8c-e77a-4876-aeb0-3b35b890f02a,Namespace:calico-system,Attempt:0,}" Nov 1 00:19:16.160000 audit[3979]: AVC avc: denied { write } for pid=3979 comm="tee" name="fd" dev="proc" ino=26026 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:19:16.183000 audit[3992]: AVC avc: denied { write } for pid=3992 comm="tee" name="fd" dev="proc" ino=25118 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:19:16.223103 kernel: audit: type=1400 audit(1761956356.160:316): avc: denied { write } for pid=3979 comm="tee" name="fd" dev="proc" ino=26026 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:19:16.223217 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 00:19:16.223251 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali032f776ccbd: link becomes ready Nov 1 00:19:16.223270 kernel: audit: type=1400 audit(1761956356.183:317): avc: denied { write } for pid=3992 comm="tee" name="fd" dev="proc" ino=25118 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:19:16.226743 systemd-networkd[1788]: cali032f776ccbd: Link UP Nov 1 00:19:16.226887 systemd-networkd[1788]: cali032f776ccbd: Gained carrier Nov 1 00:19:16.183000 audit[3992]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffffb4c07b8 a2=241 a3=1b6 items=1 ppid=3950 pid=3992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.272318 
kernel: audit: type=1300 audit(1761956356.183:317): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffffb4c07b8 a2=241 a3=1b6 items=1 ppid=3950 pid=3992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.183000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Nov 1 00:19:16.289834 kernel: audit: type=1307 audit(1761956356.183:317): cwd="/etc/service/enabled/node-status-reporter/log" Nov 1 00:19:16.183000 audit: PATH item=0 name="/dev/fd/63" inode=25115 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:19:16.308070 env[1586]: 2025-11-01 00:19:16.056 [INFO][3912] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:19:16.308070 env[1586]: 2025-11-01 00:19:16.068 [INFO][3912] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--c51a7922c9-k8s-whisker--6b964bb46--4rknd-eth0 whisker-6b964bb46- calico-system 7f2d9b8c-e77a-4876-aeb0-3b35b890f02a 930 0 2025-11-01 00:19:15 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6b964bb46 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-3510.3.8-n-c51a7922c9 whisker-6b964bb46-4rknd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali032f776ccbd [] [] }} ContainerID="44f8275caf248f4686db42fe4c9355f7bc6b9905bbe3d7a4b6fe029716dafdf6" Namespace="calico-system" Pod="whisker-6b964bb46-4rknd" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-whisker--6b964bb46--4rknd-" Nov 1 00:19:16.308070 env[1586]: 2025-11-01 00:19:16.068 [INFO][3912] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="44f8275caf248f4686db42fe4c9355f7bc6b9905bbe3d7a4b6fe029716dafdf6" Namespace="calico-system" Pod="whisker-6b964bb46-4rknd" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-whisker--6b964bb46--4rknd-eth0" Nov 1 00:19:16.308070 env[1586]: 2025-11-01 00:19:16.090 [INFO][3924] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="44f8275caf248f4686db42fe4c9355f7bc6b9905bbe3d7a4b6fe029716dafdf6" HandleID="k8s-pod-network.44f8275caf248f4686db42fe4c9355f7bc6b9905bbe3d7a4b6fe029716dafdf6" Workload="ci--3510.3.8--n--c51a7922c9-k8s-whisker--6b964bb46--4rknd-eth0" Nov 1 00:19:16.308070 env[1586]: 2025-11-01 00:19:16.090 [INFO][3924] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="44f8275caf248f4686db42fe4c9355f7bc6b9905bbe3d7a4b6fe029716dafdf6" HandleID="k8s-pod-network.44f8275caf248f4686db42fe4c9355f7bc6b9905bbe3d7a4b6fe029716dafdf6" Workload="ci--3510.3.8--n--c51a7922c9-k8s-whisker--6b964bb46--4rknd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3660), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-c51a7922c9", "pod":"whisker-6b964bb46-4rknd", "timestamp":"2025-11-01 00:19:16.090091016 +0000 UTC"}, Hostname:"ci-3510.3.8-n-c51a7922c9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:19:16.308070 env[1586]: 2025-11-01 00:19:16.090 [INFO][3924] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:19:16.308070 env[1586]: 2025-11-01 00:19:16.090 [INFO][3924] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:19:16.308070 env[1586]: 2025-11-01 00:19:16.090 [INFO][3924] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-c51a7922c9' Nov 1 00:19:16.308070 env[1586]: 2025-11-01 00:19:16.103 [INFO][3924] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.44f8275caf248f4686db42fe4c9355f7bc6b9905bbe3d7a4b6fe029716dafdf6" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:16.308070 env[1586]: 2025-11-01 00:19:16.107 [INFO][3924] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:16.308070 env[1586]: 2025-11-01 00:19:16.112 [INFO][3924] ipam/ipam.go 511: Trying affinity for 192.168.15.0/26 host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:16.308070 env[1586]: 2025-11-01 00:19:16.115 [INFO][3924] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.0/26 host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:16.308070 env[1586]: 2025-11-01 00:19:16.116 [INFO][3924] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.0/26 host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:16.308070 env[1586]: 2025-11-01 00:19:16.116 [INFO][3924] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.15.0/26 handle="k8s-pod-network.44f8275caf248f4686db42fe4c9355f7bc6b9905bbe3d7a4b6fe029716dafdf6" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:16.308070 env[1586]: 2025-11-01 00:19:16.119 [INFO][3924] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.44f8275caf248f4686db42fe4c9355f7bc6b9905bbe3d7a4b6fe029716dafdf6 Nov 1 00:19:16.308070 env[1586]: 2025-11-01 00:19:16.124 [INFO][3924] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.15.0/26 handle="k8s-pod-network.44f8275caf248f4686db42fe4c9355f7bc6b9905bbe3d7a4b6fe029716dafdf6" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:16.308070 env[1586]: 2025-11-01 00:19:16.133 [INFO][3924] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.15.1/26] block=192.168.15.0/26 
handle="k8s-pod-network.44f8275caf248f4686db42fe4c9355f7bc6b9905bbe3d7a4b6fe029716dafdf6" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:16.308070 env[1586]: 2025-11-01 00:19:16.133 [INFO][3924] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.1/26] handle="k8s-pod-network.44f8275caf248f4686db42fe4c9355f7bc6b9905bbe3d7a4b6fe029716dafdf6" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:16.308070 env[1586]: 2025-11-01 00:19:16.133 [INFO][3924] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:19:16.308070 env[1586]: 2025-11-01 00:19:16.133 [INFO][3924] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.15.1/26] IPv6=[] ContainerID="44f8275caf248f4686db42fe4c9355f7bc6b9905bbe3d7a4b6fe029716dafdf6" HandleID="k8s-pod-network.44f8275caf248f4686db42fe4c9355f7bc6b9905bbe3d7a4b6fe029716dafdf6" Workload="ci--3510.3.8--n--c51a7922c9-k8s-whisker--6b964bb46--4rknd-eth0" Nov 1 00:19:16.308629 env[1586]: 2025-11-01 00:19:16.134 [INFO][3912] cni-plugin/k8s.go 418: Populated endpoint ContainerID="44f8275caf248f4686db42fe4c9355f7bc6b9905bbe3d7a4b6fe029716dafdf6" Namespace="calico-system" Pod="whisker-6b964bb46-4rknd" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-whisker--6b964bb46--4rknd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--c51a7922c9-k8s-whisker--6b964bb46--4rknd-eth0", GenerateName:"whisker-6b964bb46-", Namespace:"calico-system", SelfLink:"", UID:"7f2d9b8c-e77a-4876-aeb0-3b35b890f02a", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 19, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b964bb46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-c51a7922c9", ContainerID:"", Pod:"whisker-6b964bb46-4rknd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.15.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali032f776ccbd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:19:16.308629 env[1586]: 2025-11-01 00:19:16.135 [INFO][3912] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.1/32] ContainerID="44f8275caf248f4686db42fe4c9355f7bc6b9905bbe3d7a4b6fe029716dafdf6" Namespace="calico-system" Pod="whisker-6b964bb46-4rknd" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-whisker--6b964bb46--4rknd-eth0" Nov 1 00:19:16.308629 env[1586]: 2025-11-01 00:19:16.135 [INFO][3912] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali032f776ccbd ContainerID="44f8275caf248f4686db42fe4c9355f7bc6b9905bbe3d7a4b6fe029716dafdf6" Namespace="calico-system" Pod="whisker-6b964bb46-4rknd" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-whisker--6b964bb46--4rknd-eth0" Nov 1 00:19:16.308629 env[1586]: 2025-11-01 00:19:16.225 [INFO][3912] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="44f8275caf248f4686db42fe4c9355f7bc6b9905bbe3d7a4b6fe029716dafdf6" Namespace="calico-system" Pod="whisker-6b964bb46-4rknd" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-whisker--6b964bb46--4rknd-eth0" Nov 1 00:19:16.308629 env[1586]: 2025-11-01 00:19:16.238 [INFO][3912] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="44f8275caf248f4686db42fe4c9355f7bc6b9905bbe3d7a4b6fe029716dafdf6" Namespace="calico-system" 
Pod="whisker-6b964bb46-4rknd" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-whisker--6b964bb46--4rknd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--c51a7922c9-k8s-whisker--6b964bb46--4rknd-eth0", GenerateName:"whisker-6b964bb46-", Namespace:"calico-system", SelfLink:"", UID:"7f2d9b8c-e77a-4876-aeb0-3b35b890f02a", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 19, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6b964bb46", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-c51a7922c9", ContainerID:"44f8275caf248f4686db42fe4c9355f7bc6b9905bbe3d7a4b6fe029716dafdf6", Pod:"whisker-6b964bb46-4rknd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.15.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali032f776ccbd", MAC:"0a:1e:2a:81:7d:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:19:16.308629 env[1586]: 2025-11-01 00:19:16.295 [INFO][3912] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="44f8275caf248f4686db42fe4c9355f7bc6b9905bbe3d7a4b6fe029716dafdf6" Namespace="calico-system" Pod="whisker-6b964bb46-4rknd" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-whisker--6b964bb46--4rknd-eth0" Nov 1 00:19:16.183000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:19:16.337351 kernel: audit: type=1302 audit(1761956356.183:317): item=0 name="/dev/fd/63" inode=25115 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:19:16.376799 kubelet[2677]: I1101 00:19:16.376747 2677 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baca364e-ee4d-4d71-abe5-6d4d260656e2" path="/var/lib/kubelet/pods/baca364e-ee4d-4d71-abe5-6d4d260656e2/volumes" Nov 1 00:19:16.411330 kernel: audit: type=1327 audit(1761956356.183:317): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:19:16.411446 kernel: audit: type=1300 audit(1761956356.160:316): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd32dc7c9 a2=241 a3=1b6 items=1 ppid=3948 pid=3979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.160000 audit[3979]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd32dc7c9 a2=241 a3=1b6 items=1 ppid=3948 pid=3979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.417687 env[1586]: time="2025-11-01T00:19:16.417600958Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:19:16.417687 env[1586]: time="2025-11-01T00:19:16.417647198Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:19:16.417687 env[1586]: time="2025-11-01T00:19:16.417657118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:19:16.417908 env[1586]: time="2025-11-01T00:19:16.417789118Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/44f8275caf248f4686db42fe4c9355f7bc6b9905bbe3d7a4b6fe029716dafdf6 pid=4045 runtime=io.containerd.runc.v2 Nov 1 00:19:16.160000 audit: CWD cwd="/etc/service/enabled/cni/log" Nov 1 00:19:16.432138 kernel: audit: type=1307 audit(1761956356.160:316): cwd="/etc/service/enabled/cni/log" Nov 1 00:19:16.160000 audit: PATH item=0 name="/dev/fd/63" inode=26017 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:19:16.453542 kernel: audit: type=1302 audit(1761956356.160:316): item=0 name="/dev/fd/63" inode=26017 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:19:16.160000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:19:16.489256 kernel: audit: type=1327 audit(1761956356.160:316): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:19:16.201000 audit[3995]: AVC avc: denied { write } for pid=3995 comm="tee" name="fd" dev="proc" ino=26062 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:19:16.201000 audit[3995]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd22e37c7 a2=241 a3=1b6 items=1 ppid=3944 
pid=3995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.201000 audit: CWD cwd="/etc/service/enabled/confd/log" Nov 1 00:19:16.201000 audit: PATH item=0 name="/dev/fd/63" inode=26027 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:19:16.201000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:19:16.222000 audit[3997]: AVC avc: denied { write } for pid=3997 comm="tee" name="fd" dev="proc" ino=25126 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:19:16.222000 audit[3997]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffffab17c7 a2=241 a3=1b6 items=1 ppid=3946 pid=3997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.222000 audit: CWD cwd="/etc/service/enabled/bird6/log" Nov 1 00:19:16.222000 audit: PATH item=0 name="/dev/fd/63" inode=26028 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:19:16.222000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:19:16.240000 audit[4006]: AVC avc: denied { write } for pid=4006 comm="tee" name="fd" dev="proc" ino=25132 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:19:16.240000 audit[4006]: SYSCALL arch=c00000b7 
syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc54877c7 a2=241 a3=1b6 items=1 ppid=3942 pid=4006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.240000 audit: CWD cwd="/etc/service/enabled/felix/log" Nov 1 00:19:16.240000 audit: PATH item=0 name="/dev/fd/63" inode=26057 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:19:16.240000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:19:16.242000 audit[4008]: AVC avc: denied { write } for pid=4008 comm="tee" name="fd" dev="proc" ino=25136 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:19:16.242000 audit[4008]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc613c7c8 a2=241 a3=1b6 items=1 ppid=3952 pid=4008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.242000 audit: CWD cwd="/etc/service/enabled/bird/log" Nov 1 00:19:16.242000 audit: PATH item=0 name="/dev/fd/63" inode=26059 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:19:16.242000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:19:16.242000 audit[4017]: AVC avc: denied { write } for pid=4017 comm="tee" name="fd" dev="proc" ino=25143 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:19:16.242000 audit[4017]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc4c8b7b7 a2=241 a3=1b6 items=1 ppid=3939 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.242000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Nov 1 00:19:16.242000 audit: PATH item=0 name="/dev/fd/63" inode=25140 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:19:16.242000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:19:16.528978 env[1586]: time="2025-11-01T00:19:16.528925030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b964bb46-4rknd,Uid:7f2d9b8c-e77a-4876-aeb0-3b35b890f02a,Namespace:calico-system,Attempt:0,} returns sandbox id \"44f8275caf248f4686db42fe4c9355f7bc6b9905bbe3d7a4b6fe029716dafdf6\"" Nov 1 00:19:16.530910 env[1586]: time="2025-11-01T00:19:16.530885068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:19:16.648000 audit[4097]: AVC avc: denied { bpf } for pid=4097 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.648000 audit[4097]: AVC avc: denied { bpf } for pid=4097 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.648000 audit[4097]: AVC avc: denied { perfmon } for pid=4097 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Nov 1 00:19:16.648000 audit[4097]: AVC avc: denied { perfmon } for pid=4097 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.648000 audit[4097]: AVC avc: denied { perfmon } for pid=4097 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.648000 audit[4097]: AVC avc: denied { perfmon } for pid=4097 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.648000 audit[4097]: AVC avc: denied { perfmon } for pid=4097 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.648000 audit[4097]: AVC avc: denied { bpf } for pid=4097 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.648000 audit[4097]: AVC avc: denied { bpf } for pid=4097 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.648000 audit: BPF prog-id=10 op=LOAD Nov 1 00:19:16.648000 audit[4097]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc3a25308 a2=98 a3=ffffc3a252f8 items=0 ppid=3943 pid=4097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.648000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 
00:19:16.649000 audit: BPF prog-id=10 op=UNLOAD Nov 1 00:19:16.649000 audit[4097]: AVC avc: denied { bpf } for pid=4097 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.649000 audit[4097]: AVC avc: denied { bpf } for pid=4097 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.649000 audit[4097]: AVC avc: denied { perfmon } for pid=4097 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.649000 audit[4097]: AVC avc: denied { perfmon } for pid=4097 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.649000 audit[4097]: AVC avc: denied { perfmon } for pid=4097 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.649000 audit[4097]: AVC avc: denied { perfmon } for pid=4097 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.649000 audit[4097]: AVC avc: denied { perfmon } for pid=4097 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.649000 audit[4097]: AVC avc: denied { bpf } for pid=4097 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.649000 audit[4097]: AVC avc: denied { bpf } for pid=4097 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.649000 audit: BPF 
prog-id=11 op=LOAD Nov 1 00:19:16.649000 audit[4097]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc3a251b8 a2=74 a3=95 items=0 ppid=3943 pid=4097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.649000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 00:19:16.649000 audit: BPF prog-id=11 op=UNLOAD Nov 1 00:19:16.649000 audit[4097]: AVC avc: denied { bpf } for pid=4097 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.649000 audit[4097]: AVC avc: denied { bpf } for pid=4097 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.649000 audit[4097]: AVC avc: denied { perfmon } for pid=4097 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.649000 audit[4097]: AVC avc: denied { perfmon } for pid=4097 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.649000 audit[4097]: AVC avc: denied { perfmon } for pid=4097 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.649000 audit[4097]: AVC avc: denied { perfmon } for pid=4097 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
00:19:16.649000 audit[4097]: AVC avc: denied { perfmon } for pid=4097 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.649000 audit[4097]: AVC avc: denied { bpf } for pid=4097 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.649000 audit[4097]: AVC avc: denied { bpf } for pid=4097 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.649000 audit: BPF prog-id=12 op=LOAD Nov 1 00:19:16.649000 audit[4097]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc3a251e8 a2=40 a3=ffffc3a25218 items=0 ppid=3943 pid=4097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.649000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 00:19:16.649000 audit: BPF prog-id=12 op=UNLOAD Nov 1 00:19:16.649000 audit[4097]: AVC avc: denied { perfmon } for pid=4097 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.649000 audit[4097]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=0 a1=ffffc3a25300 a2=50 a3=0 items=0 ppid=3943 pid=4097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.649000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 00:19:16.651000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.651000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.651000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.651000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.651000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.651000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.651000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.651000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.651000 
audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.651000 audit: BPF prog-id=13 op=LOAD Nov 1 00:19:16.651000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcdcbfaf8 a2=98 a3=ffffcdcbfae8 items=0 ppid=3943 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.651000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:19:16.651000 audit: BPF prog-id=13 op=UNLOAD Nov 1 00:19:16.651000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.651000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.651000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.651000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.651000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.651000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Nov 1 00:19:16.651000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.651000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.651000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.651000 audit: BPF prog-id=14 op=LOAD Nov 1 00:19:16.651000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcdcbf788 a2=74 a3=95 items=0 ppid=3943 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.651000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:19:16.651000 audit: BPF prog-id=14 op=UNLOAD Nov 1 00:19:16.651000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.651000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.651000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.651000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.651000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.651000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.651000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.651000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.651000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.651000 audit: BPF prog-id=15 op=LOAD Nov 1 00:19:16.651000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcdcbf7e8 a2=94 a3=2 items=0 ppid=3943 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.651000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:19:16.652000 audit: BPF prog-id=15 op=UNLOAD Nov 1 00:19:16.747000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.747000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.747000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.747000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.747000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.747000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.747000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.747000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.747000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.747000 audit: BPF prog-id=16 op=LOAD Nov 1 00:19:16.747000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcdcbf7a8 a2=40 a3=ffffcdcbf7d8 items=0 ppid=3943 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
Nov 1 00:19:16.747000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:19:16.747000 audit: BPF prog-id=16 op=UNLOAD Nov 1 00:19:16.747000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.747000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffcdcbf8c0 a2=50 a3=0 items=0 ppid=3943 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.747000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:19:16.755000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.755000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcdcbf818 a2=28 a3=ffffcdcbf948 items=0 ppid=3943 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.755000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:19:16.755000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.755000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcdcbf848 a2=28 a3=ffffcdcbf978 items=0 ppid=3943 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 
00:19:16.755000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:19:16.755000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.755000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcdcbf6f8 a2=28 a3=ffffcdcbf828 items=0 ppid=3943 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.755000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:19:16.755000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.755000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcdcbf868 a2=28 a3=ffffcdcbf998 items=0 ppid=3943 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.755000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:19:16.755000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.755000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcdcbf848 a2=28 a3=ffffcdcbf978 items=0 ppid=3943 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.755000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:19:16.755000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.755000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcdcbf838 a2=28 a3=ffffcdcbf968 items=0 ppid=3943 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.755000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:19:16.755000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.755000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcdcbf868 a2=28 a3=ffffcdcbf998 items=0 ppid=3943 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.755000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:19:16.755000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.755000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcdcbf848 a2=28 a3=ffffcdcbf978 items=0 ppid=3943 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.755000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:19:16.755000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.755000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcdcbf868 a2=28 a3=ffffcdcbf998 items=0 ppid=3943 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.755000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:19:16.755000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.755000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcdcbf838 a2=28 a3=ffffcdcbf968 items=0 ppid=3943 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.755000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcdcbf8b8 a2=28 a3=ffffcdcbf9f8 items=0 ppid=3943 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.756000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffcdcbf5f0 a2=50 a3=0 items=0 ppid=3943 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.756000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
00:19:16.756000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit: BPF prog-id=17 op=LOAD Nov 1 00:19:16.756000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcdcbf5f8 a2=94 a3=5 items=0 ppid=3943 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.756000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:19:16.756000 audit: BPF prog-id=17 op=UNLOAD Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffcdcbf700 a2=50 a3=0 items=0 ppid=3943 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.756000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Nov 1 00:19:16.756000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffcdcbf848 a2=4 a3=3 items=0 ppid=3943 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.756000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { perfmon } for 
pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { confidentiality } for pid=4098 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:19:16.756000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffcdcbf828 a2=94 a3=6 items=0 ppid=3943 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.756000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { confidentiality } for pid=4098 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:19:16.756000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffcdcbeff8 a2=94 a3=83 items=0 ppid=3943 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 
00:19:16.756000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { bpf } for pid=4098 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: AVC avc: denied { perfmon } for pid=4098 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.756000 audit[4098]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffcdcbeff8 a2=94 a3=83 items=0 ppid=3943 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.756000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:19:16.767000 audit[4101]: AVC avc: denied { bpf } for pid=4101 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.767000 audit[4101]: AVC avc: denied { bpf } for pid=4101 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.767000 audit[4101]: AVC avc: denied { perfmon } for pid=4101 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.767000 audit[4101]: AVC avc: denied { perfmon } for pid=4101 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.767000 audit[4101]: AVC avc: denied { perfmon } for pid=4101 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.767000 audit[4101]: AVC avc: denied { perfmon } for pid=4101 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.767000 audit[4101]: AVC avc: denied { perfmon } for pid=4101 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.767000 audit[4101]: AVC avc: denied { bpf } for pid=4101 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.767000 audit[4101]: AVC avc: denied { bpf } for 
pid=4101 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.767000 audit: BPF prog-id=18 op=LOAD Nov 1 00:19:16.767000 audit[4101]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff8f20d88 a2=98 a3=fffff8f20d78 items=0 ppid=3943 pid=4101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.767000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Nov 1 00:19:16.767000 audit: BPF prog-id=18 op=UNLOAD Nov 1 00:19:16.767000 audit[4101]: AVC avc: denied { bpf } for pid=4101 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.767000 audit[4101]: AVC avc: denied { bpf } for pid=4101 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.767000 audit[4101]: AVC avc: denied { perfmon } for pid=4101 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.767000 audit[4101]: AVC avc: denied { perfmon } for pid=4101 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.767000 audit[4101]: AVC avc: denied { perfmon } for pid=4101 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.767000 
audit[4101]: AVC avc: denied { perfmon } for pid=4101 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.767000 audit[4101]: AVC avc: denied { perfmon } for pid=4101 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.767000 audit[4101]: AVC avc: denied { bpf } for pid=4101 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.767000 audit[4101]: AVC avc: denied { bpf } for pid=4101 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.767000 audit: BPF prog-id=19 op=LOAD Nov 1 00:19:16.767000 audit[4101]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff8f20c38 a2=74 a3=95 items=0 ppid=3943 pid=4101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.767000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Nov 1 00:19:16.767000 audit: BPF prog-id=19 op=UNLOAD Nov 1 00:19:16.767000 audit[4101]: AVC avc: denied { bpf } for pid=4101 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.767000 audit[4101]: AVC avc: denied { bpf } for pid=4101 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
00:19:16.767000 audit[4101]: AVC avc: denied { perfmon } for pid=4101 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.767000 audit[4101]: AVC avc: denied { perfmon } for pid=4101 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.767000 audit[4101]: AVC avc: denied { perfmon } for pid=4101 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.767000 audit[4101]: AVC avc: denied { perfmon } for pid=4101 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.767000 audit[4101]: AVC avc: denied { perfmon } for pid=4101 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.767000 audit[4101]: AVC avc: denied { bpf } for pid=4101 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.767000 audit[4101]: AVC avc: denied { bpf } for pid=4101 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.767000 audit: BPF prog-id=20 op=LOAD Nov 1 00:19:16.767000 audit[4101]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff8f20c68 a2=40 a3=fffff8f20c98 items=0 ppid=3943 pid=4101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.767000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Nov 1 00:19:16.767000 audit: BPF prog-id=20 op=UNLOAD Nov 1 00:19:16.812067 env[1586]: time="2025-11-01T00:19:16.812012087Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:19:16.816556 env[1586]: time="2025-11-01T00:19:16.815969284Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:19:16.817098 kubelet[2677]: E1101 00:19:16.816828 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:19:16.817098 kubelet[2677]: E1101 00:19:16.816899 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:19:16.817233 kubelet[2677]: E1101 00:19:16.817055 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3429d9572f3c4ccbba53eb23e40c8366,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rqvq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b964bb46-4rknd_calico-system(7f2d9b8c-e77a-4876-aeb0-3b35b890f02a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:19:16.819366 env[1586]: time="2025-11-01T00:19:16.819332561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:19:16.942342 
systemd-networkd[1788]: vxlan.calico: Link UP Nov 1 00:19:16.942349 systemd-networkd[1788]: vxlan.calico: Gained carrier Nov 1 00:19:16.956000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.956000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.956000 audit[4127]: AVC avc: denied { perfmon } for pid=4127 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.956000 audit[4127]: AVC avc: denied { perfmon } for pid=4127 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.956000 audit[4127]: AVC avc: denied { perfmon } for pid=4127 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.956000 audit[4127]: AVC avc: denied { perfmon } for pid=4127 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.956000 audit[4127]: AVC avc: denied { perfmon } for pid=4127 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.956000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.956000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.956000 audit: BPF prog-id=21 op=LOAD Nov 1 00:19:16.956000 audit[4127]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff89d0998 a2=98 a3=fffff89d0988 items=0 ppid=3943 pid=4127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.956000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:19:16.960000 audit: BPF prog-id=21 op=UNLOAD Nov 1 00:19:16.960000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.960000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.960000 audit[4127]: AVC avc: denied { perfmon } for pid=4127 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.960000 audit[4127]: AVC avc: denied { perfmon } for pid=4127 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.960000 audit[4127]: AVC avc: denied { perfmon } for pid=4127 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.960000 audit[4127]: AVC avc: denied { perfmon } for pid=4127 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.960000 audit[4127]: AVC avc: denied { perfmon } for pid=4127 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.960000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.960000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.960000 audit: BPF prog-id=22 op=LOAD Nov 1 00:19:16.960000 audit[4127]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff89d0678 a2=74 a3=95 items=0 ppid=3943 pid=4127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.960000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:19:16.964000 audit: BPF prog-id=22 op=UNLOAD Nov 1 00:19:16.964000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.964000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.964000 audit[4127]: AVC avc: denied { perfmon } for pid=4127 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.964000 audit[4127]: AVC avc: denied { perfmon } for pid=4127 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.964000 audit[4127]: AVC avc: denied { perfmon } for pid=4127 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.964000 audit[4127]: AVC avc: denied { perfmon } for pid=4127 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.964000 audit[4127]: AVC avc: denied { perfmon } for pid=4127 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.964000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.964000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.964000 audit: BPF prog-id=23 op=LOAD Nov 1 00:19:16.964000 audit[4127]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff89d06d8 a2=94 a3=2 items=0 ppid=3943 pid=4127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.964000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 
00:19:16.964000 audit: BPF prog-id=23 op=UNLOAD Nov 1 00:19:16.964000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.964000 audit[4127]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffff89d0708 a2=28 a3=fffff89d0838 items=0 ppid=3943 pid=4127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.964000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:19:16.968000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.968000 audit[4127]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff89d0738 a2=28 a3=fffff89d0868 items=0 ppid=3943 pid=4127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.968000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:19:16.968000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.968000 audit[4127]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff89d05e8 a2=28 
a3=fffff89d0718 items=0 ppid=3943 pid=4127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.968000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:19:16.968000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.968000 audit[4127]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffff89d0758 a2=28 a3=fffff89d0888 items=0 ppid=3943 pid=4127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.968000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:19:16.968000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.968000 audit[4127]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffff89d0738 a2=28 a3=fffff89d0868 items=0 ppid=3943 pid=4127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.968000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:19:16.968000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.968000 audit[4127]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffff89d0728 a2=28 a3=fffff89d0858 items=0 ppid=3943 pid=4127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.968000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:19:16.968000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.968000 audit[4127]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffff89d0758 a2=28 a3=fffff89d0888 items=0 ppid=3943 pid=4127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.968000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:19:16.968000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.968000 audit[4127]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff89d0738 a2=28 a3=fffff89d0868 items=0 ppid=3943 pid=4127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.968000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:19:16.968000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.968000 audit[4127]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff89d0758 a2=28 a3=fffff89d0888 items=0 ppid=3943 pid=4127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.968000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:19:16.968000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.968000 audit[4127]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=fffff89d0728 a2=28 a3=fffff89d0858 items=0 ppid=3943 pid=4127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.968000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:19:16.968000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.968000 audit[4127]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=fffff89d07a8 a2=28 a3=fffff89d08e8 items=0 ppid=3943 pid=4127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.968000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:19:16.969000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.969000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.969000 audit[4127]: AVC avc: denied { perfmon } for pid=4127 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.969000 audit[4127]: AVC avc: denied { perfmon } for pid=4127 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.969000 
audit[4127]: AVC avc: denied { perfmon } for pid=4127 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.969000 audit[4127]: AVC avc: denied { perfmon } for pid=4127 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.969000 audit[4127]: AVC avc: denied { perfmon } for pid=4127 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.969000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.969000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.969000 audit: BPF prog-id=24 op=LOAD Nov 1 00:19:16.969000 audit[4127]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff89d05c8 a2=40 a3=fffff89d05f8 items=0 ppid=3943 pid=4127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.969000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:19:16.970000 audit: BPF prog-id=24 op=UNLOAD Nov 1 00:19:16.970000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.970000 audit[4127]: 
SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=0 a1=fffff89d05f0 a2=50 a3=0 items=0 ppid=3943 pid=4127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.970000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:19:16.971000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.971000 audit[4127]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=0 a1=fffff89d05f0 a2=50 a3=0 items=0 ppid=3943 pid=4127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.971000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:19:16.971000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.971000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.971000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
00:19:16.971000 audit[4127]: AVC avc: denied { perfmon } for pid=4127 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.971000 audit[4127]: AVC avc: denied { perfmon } for pid=4127 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.971000 audit[4127]: AVC avc: denied { perfmon } for pid=4127 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.971000 audit[4127]: AVC avc: denied { perfmon } for pid=4127 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.971000 audit[4127]: AVC avc: denied { perfmon } for pid=4127 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.971000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.971000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.971000 audit: BPF prog-id=25 op=LOAD Nov 1 00:19:16.971000 audit[4127]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff89cfd58 a2=94 a3=2 items=0 ppid=3943 pid=4127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.971000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:19:16.971000 audit: BPF prog-id=25 op=UNLOAD Nov 1 00:19:16.971000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.971000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.971000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.971000 audit[4127]: AVC avc: denied { perfmon } for pid=4127 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.971000 audit[4127]: AVC avc: denied { perfmon } for pid=4127 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.971000 audit[4127]: AVC avc: denied { perfmon } for pid=4127 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.971000 audit[4127]: AVC avc: denied { perfmon } for pid=4127 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.971000 audit[4127]: AVC avc: denied { perfmon } for pid=4127 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
00:19:16.971000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.971000 audit[4127]: AVC avc: denied { bpf } for pid=4127 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.971000 audit: BPF prog-id=26 op=LOAD Nov 1 00:19:16.971000 audit[4127]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff89cfee8 a2=94 a3=30 items=0 ppid=3943 pid=4127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.971000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:19:16.974000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.974000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.974000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.974000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.974000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.974000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.974000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.974000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.974000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.974000 audit: BPF prog-id=27 op=LOAD Nov 1 00:19:16.974000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd653e8f8 a2=98 a3=ffffd653e8e8 items=0 ppid=3943 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.974000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:19:16.975000 audit: BPF prog-id=27 op=UNLOAD Nov 1 00:19:16.975000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.975000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.975000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.975000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.975000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.975000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.975000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.975000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.975000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.975000 audit: BPF prog-id=28 op=LOAD Nov 1 00:19:16.975000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd653e588 a2=74 a3=95 items=0 ppid=3943 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 
00:19:16.975000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:19:16.976000 audit: BPF prog-id=28 op=UNLOAD Nov 1 00:19:16.976000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.976000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.976000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.976000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.976000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.976000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.976000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.976000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
00:19:16.976000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:16.976000 audit: BPF prog-id=29 op=LOAD Nov 1 00:19:16.976000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd653e5e8 a2=94 a3=2 items=0 ppid=3943 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:16.976000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:19:16.977000 audit: BPF prog-id=29 op=UNLOAD Nov 1 00:19:17.057129 env[1586]: time="2025-11-01T00:19:17.056997094Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:19:17.067506 env[1586]: time="2025-11-01T00:19:17.067403606Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:19:17.067753 kubelet[2677]: E1101 00:19:17.067711 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:19:17.067996 kubelet[2677]: E1101 00:19:17.067772 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:19:17.069667 kubelet[2677]: E1101 00:19:17.069574 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rqvq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:
nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b964bb46-4rknd_calico-system(7f2d9b8c-e77a-4876-aeb0-3b35b890f02a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:19:17.070831 kubelet[2677]: E1101 00:19:17.070770 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b964bb46-4rknd" podUID="7f2d9b8c-e77a-4876-aeb0-3b35b890f02a" Nov 1 00:19:17.072000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.072000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.072000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.072000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.072000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.072000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.072000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.072000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.072000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.072000 audit: BPF prog-id=30 op=LOAD Nov 1 00:19:17.072000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd653e5a8 a2=40 a3=ffffd653e5d8 items=0 ppid=3943 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:17.072000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:19:17.072000 audit: BPF prog-id=30 op=UNLOAD Nov 1 00:19:17.073000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.073000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffd653e6c0 a2=50 a3=0 items=0 ppid=3943 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:17.073000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:19:17.081000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.081000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd653e618 a2=28 a3=ffffd653e748 items=0 ppid=3943 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:17.081000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:19:17.082000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Nov 1 00:19:17.082000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd653e648 a2=28 a3=ffffd653e778 items=0 ppid=3943 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:17.082000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:19:17.082000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.082000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd653e4f8 a2=28 a3=ffffd653e628 items=0 ppid=3943 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:17.082000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:19:17.082000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.082000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd653e668 a2=28 a3=ffffd653e798 items=0 ppid=3943 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:17.082000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:19:17.082000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.082000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd653e648 a2=28 a3=ffffd653e778 items=0 ppid=3943 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:17.082000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:19:17.082000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.082000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd653e638 a2=28 a3=ffffd653e768 items=0 ppid=3943 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:17.082000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:19:17.083000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.083000 audit[4132]: 
SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd653e668 a2=28 a3=ffffd653e798 items=0 ppid=3943 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:17.083000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:19:17.083000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.083000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd653e648 a2=28 a3=ffffd653e778 items=0 ppid=3943 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:17.083000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:19:17.083000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.083000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd653e668 a2=28 a3=ffffd653e798 items=0 ppid=3943 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:17.083000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:19:17.083000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.083000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffd653e638 a2=28 a3=ffffd653e768 items=0 ppid=3943 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:17.083000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:19:17.083000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.083000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffd653e6b8 a2=28 a3=ffffd653e7f8 items=0 ppid=3943 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:17.083000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:19:17.084000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.084000 audit[4132]: 
SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffd653e3f0 a2=50 a3=0 items=0 ppid=3943 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:17.084000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:19:17.084000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.084000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.084000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.084000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.084000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.084000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.084000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 00:19:17.084000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.084000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.084000 audit: BPF prog-id=31 op=LOAD Nov 1 00:19:17.084000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd653e3f8 a2=94 a3=5 items=0 ppid=3943 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:17.084000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:19:17.084000 audit: BPF prog-id=31 op=UNLOAD Nov 1 00:19:17.084000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.084000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffd653e500 a2=50 a3=0 items=0 ppid=3943 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:17.084000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:19:17.084000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.084000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffd653e648 a2=4 a3=3 items=0 ppid=3943 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:17.084000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:19:17.085000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.085000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.085000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.085000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.085000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.085000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.085000 audit[4132]: AVC avc: denied { 
perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.085000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.085000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.085000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.085000 audit[4132]: AVC avc: denied { confidentiality } for pid=4132 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:19:17.085000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffd653e628 a2=94 a3=6 items=0 ppid=3943 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:17.085000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:19:17.085000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.085000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.085000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.085000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.085000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.085000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.085000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.085000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.085000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.085000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.085000 audit[4132]: AVC avc: denied { confidentiality } for pid=4132 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:19:17.085000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffd653ddf8 a2=94 a3=83 items=0 ppid=3943 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:17.085000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:19:17.086000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.086000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.086000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.086000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.086000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.086000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.086000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.086000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.086000 audit[4132]: AVC avc: denied { perfmon } for pid=4132 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.086000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.086000 audit[4132]: AVC avc: denied { confidentiality } for pid=4132 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:19:17.086000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffd653ddf8 a2=94 a3=83 items=0 ppid=3943 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:17.086000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:19:17.087000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.087000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffd653f838 a2=10 a3=ffffd653f928 items=0 ppid=3943 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:17.087000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:19:17.087000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.087000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffd653f6f8 a2=10 a3=ffffd653f7e8 items=0 ppid=3943 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:17.087000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:19:17.088000 audit[4132]: AVC avc: denied { bpf } for pid=4132 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.088000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffd653f668 a2=10 a3=ffffd653f7e8 items=0 ppid=3943 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:17.088000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:19:17.088000 audit[4132]: AVC avc: denied { bpf } for pid=4132 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:19:17.088000 audit[4132]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffd653f668 a2=10 a3=ffffd653f7e8 items=0 ppid=3943 pid=4132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:17.088000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:19:17.094000 audit: BPF prog-id=26 op=UNLOAD Nov 1 00:19:17.301000 audit[4160]: NETFILTER_CFG table=mangle:108 family=2 entries=16 op=nft_register_chain pid=4160 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:19:17.301000 audit[4160]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffe99e4620 a2=0 a3=ffff8e5b8fa8 items=0 ppid=3943 pid=4160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:17.301000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:19:17.343000 audit[4158]: NETFILTER_CFG table=nat:109 family=2 entries=15 op=nft_register_chain pid=4158 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:19:17.343000 audit[4158]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffd11c29c0 a2=0 a3=ffffab055fa8 items=0 ppid=3943 pid=4158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:17.343000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:19:17.374000 audit[4157]: NETFILTER_CFG table=raw:110 family=2 entries=21 op=nft_register_chain pid=4157 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:19:17.374000 audit[4157]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffe1223f80 a2=0 a3=ffff9b51ffa8 items=0 ppid=3943 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:17.374000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:19:17.391000 audit[4161]: NETFILTER_CFG table=filter:111 family=2 entries=94 op=nft_register_chain pid=4161 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:19:17.391000 audit[4161]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffcef90170 a2=0 a3=ffffaf230fa8 items=0 ppid=3943 pid=4161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:17.391000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:19:17.617669 kubelet[2677]: E1101 00:19:17.617517 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b964bb46-4rknd" podUID="7f2d9b8c-e77a-4876-aeb0-3b35b890f02a" Nov 1 00:19:17.645000 audit[4173]: NETFILTER_CFG table=filter:112 family=2 entries=20 op=nft_register_rule pid=4173 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:19:17.645000 audit[4173]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffea993b70 a2=0 a3=1 items=0 ppid=2782 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:17.645000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:19:17.653000 audit[4173]: NETFILTER_CFG table=nat:113 family=2 entries=14 op=nft_register_rule pid=4173 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:19:17.653000 audit[4173]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffea993b70 a2=0 a3=1 items=0 ppid=2782 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:17.653000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:19:18.125428 systemd-networkd[1788]: vxlan.calico: Gained IPv6LL Nov 1 00:19:18.253391 systemd-networkd[1788]: cali032f776ccbd: Gained IPv6LL Nov 1 00:19:20.354427 env[1586]: time="2025-11-01T00:19:20.354098965Z" level=info msg="StopPodSandbox for \"99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9\"" Nov 1 00:19:20.532825 env[1586]: 2025-11-01 00:19:20.422 [INFO][4185] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" Nov 1 00:19:20.532825 env[1586]: 2025-11-01 00:19:20.422 [INFO][4185] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" iface="eth0" netns="/var/run/netns/cni-6018dae0-7bde-62fc-61e1-623fdb90b140" Nov 1 00:19:20.532825 env[1586]: 2025-11-01 00:19:20.423 [INFO][4185] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" iface="eth0" netns="/var/run/netns/cni-6018dae0-7bde-62fc-61e1-623fdb90b140" Nov 1 00:19:20.532825 env[1586]: 2025-11-01 00:19:20.423 [INFO][4185] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" iface="eth0" netns="/var/run/netns/cni-6018dae0-7bde-62fc-61e1-623fdb90b140" Nov 1 00:19:20.532825 env[1586]: 2025-11-01 00:19:20.423 [INFO][4185] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" Nov 1 00:19:20.532825 env[1586]: 2025-11-01 00:19:20.423 [INFO][4185] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" Nov 1 00:19:20.532825 env[1586]: 2025-11-01 00:19:20.442 [INFO][4192] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" HandleID="k8s-pod-network.99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--kube--controllers--86c5674785--bs7n8-eth0" Nov 1 00:19:20.532825 env[1586]: 2025-11-01 00:19:20.442 [INFO][4192] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:19:20.532825 env[1586]: 2025-11-01 00:19:20.442 [INFO][4192] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:19:20.532825 env[1586]: 2025-11-01 00:19:20.523 [WARNING][4192] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" HandleID="k8s-pod-network.99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--kube--controllers--86c5674785--bs7n8-eth0" Nov 1 00:19:20.532825 env[1586]: 2025-11-01 00:19:20.523 [INFO][4192] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" HandleID="k8s-pod-network.99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--kube--controllers--86c5674785--bs7n8-eth0" Nov 1 00:19:20.532825 env[1586]: 2025-11-01 00:19:20.525 [INFO][4192] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:19:20.532825 env[1586]: 2025-11-01 00:19:20.527 [INFO][4185] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" Nov 1 00:19:20.532825 env[1586]: time="2025-11-01T00:19:20.531758272Z" level=info msg="TearDown network for sandbox \"99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9\" successfully" Nov 1 00:19:20.532825 env[1586]: time="2025-11-01T00:19:20.531802992Z" level=info msg="StopPodSandbox for \"99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9\" returns successfully" Nov 1 00:19:20.532825 env[1586]: time="2025-11-01T00:19:20.532429192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86c5674785-bs7n8,Uid:57cd90f3-35a2-40bb-93fb-693c3ffcd73d,Namespace:calico-system,Attempt:1,}" Nov 1 00:19:20.533918 systemd[1]: run-netns-cni\x2d6018dae0\x2d7bde\x2d62fc\x2d61e1\x2d623fdb90b140.mount: Deactivated successfully. 
Nov 1 00:19:20.689133 systemd-networkd[1788]: cali4cfc28f5684: Link UP Nov 1 00:19:20.697687 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 00:19:20.697785 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali4cfc28f5684: link becomes ready Nov 1 00:19:20.698127 systemd-networkd[1788]: cali4cfc28f5684: Gained carrier Nov 1 00:19:20.722475 env[1586]: 2025-11-01 00:19:20.610 [INFO][4198] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--c51a7922c9-k8s-calico--kube--controllers--86c5674785--bs7n8-eth0 calico-kube-controllers-86c5674785- calico-system 57cd90f3-35a2-40bb-93fb-693c3ffcd73d 960 0 2025-11-01 00:18:57 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:86c5674785 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510.3.8-n-c51a7922c9 calico-kube-controllers-86c5674785-bs7n8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4cfc28f5684 [] [] }} ContainerID="fdc6edca14517e156a46e9cebcaeb939e0078635d422542b59847ce7a0b1569b" Namespace="calico-system" Pod="calico-kube-controllers-86c5674785-bs7n8" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-calico--kube--controllers--86c5674785--bs7n8-" Nov 1 00:19:20.722475 env[1586]: 2025-11-01 00:19:20.610 [INFO][4198] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fdc6edca14517e156a46e9cebcaeb939e0078635d422542b59847ce7a0b1569b" Namespace="calico-system" Pod="calico-kube-controllers-86c5674785-bs7n8" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-calico--kube--controllers--86c5674785--bs7n8-eth0" Nov 1 00:19:20.722475 env[1586]: 2025-11-01 00:19:20.637 [INFO][4211] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="fdc6edca14517e156a46e9cebcaeb939e0078635d422542b59847ce7a0b1569b" HandleID="k8s-pod-network.fdc6edca14517e156a46e9cebcaeb939e0078635d422542b59847ce7a0b1569b" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--kube--controllers--86c5674785--bs7n8-eth0" Nov 1 00:19:20.722475 env[1586]: 2025-11-01 00:19:20.637 [INFO][4211] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fdc6edca14517e156a46e9cebcaeb939e0078635d422542b59847ce7a0b1569b" HandleID="k8s-pod-network.fdc6edca14517e156a46e9cebcaeb939e0078635d422542b59847ce7a0b1569b" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--kube--controllers--86c5674785--bs7n8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-c51a7922c9", "pod":"calico-kube-controllers-86c5674785-bs7n8", "timestamp":"2025-11-01 00:19:20.637688633 +0000 UTC"}, Hostname:"ci-3510.3.8-n-c51a7922c9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:19:20.722475 env[1586]: 2025-11-01 00:19:20.637 [INFO][4211] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:19:20.722475 env[1586]: 2025-11-01 00:19:20.637 [INFO][4211] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:19:20.722475 env[1586]: 2025-11-01 00:19:20.637 [INFO][4211] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-c51a7922c9' Nov 1 00:19:20.722475 env[1586]: 2025-11-01 00:19:20.648 [INFO][4211] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fdc6edca14517e156a46e9cebcaeb939e0078635d422542b59847ce7a0b1569b" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:20.722475 env[1586]: 2025-11-01 00:19:20.652 [INFO][4211] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:20.722475 env[1586]: 2025-11-01 00:19:20.656 [INFO][4211] ipam/ipam.go 511: Trying affinity for 192.168.15.0/26 host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:20.722475 env[1586]: 2025-11-01 00:19:20.657 [INFO][4211] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.0/26 host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:20.722475 env[1586]: 2025-11-01 00:19:20.659 [INFO][4211] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.0/26 host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:20.722475 env[1586]: 2025-11-01 00:19:20.659 [INFO][4211] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.15.0/26 handle="k8s-pod-network.fdc6edca14517e156a46e9cebcaeb939e0078635d422542b59847ce7a0b1569b" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:20.722475 env[1586]: 2025-11-01 00:19:20.660 [INFO][4211] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fdc6edca14517e156a46e9cebcaeb939e0078635d422542b59847ce7a0b1569b Nov 1 00:19:20.722475 env[1586]: 2025-11-01 00:19:20.669 [INFO][4211] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.15.0/26 handle="k8s-pod-network.fdc6edca14517e156a46e9cebcaeb939e0078635d422542b59847ce7a0b1569b" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:20.722475 env[1586]: 2025-11-01 00:19:20.675 [INFO][4211] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.15.2/26] block=192.168.15.0/26 
handle="k8s-pod-network.fdc6edca14517e156a46e9cebcaeb939e0078635d422542b59847ce7a0b1569b" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:20.722475 env[1586]: 2025-11-01 00:19:20.675 [INFO][4211] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.2/26] handle="k8s-pod-network.fdc6edca14517e156a46e9cebcaeb939e0078635d422542b59847ce7a0b1569b" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:20.722475 env[1586]: 2025-11-01 00:19:20.675 [INFO][4211] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:19:20.722475 env[1586]: 2025-11-01 00:19:20.675 [INFO][4211] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.15.2/26] IPv6=[] ContainerID="fdc6edca14517e156a46e9cebcaeb939e0078635d422542b59847ce7a0b1569b" HandleID="k8s-pod-network.fdc6edca14517e156a46e9cebcaeb939e0078635d422542b59847ce7a0b1569b" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--kube--controllers--86c5674785--bs7n8-eth0" Nov 1 00:19:20.723052 env[1586]: 2025-11-01 00:19:20.677 [INFO][4198] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fdc6edca14517e156a46e9cebcaeb939e0078635d422542b59847ce7a0b1569b" Namespace="calico-system" Pod="calico-kube-controllers-86c5674785-bs7n8" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-calico--kube--controllers--86c5674785--bs7n8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--c51a7922c9-k8s-calico--kube--controllers--86c5674785--bs7n8-eth0", GenerateName:"calico-kube-controllers-86c5674785-", Namespace:"calico-system", SelfLink:"", UID:"57cd90f3-35a2-40bb-93fb-693c3ffcd73d", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 18, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86c5674785", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-c51a7922c9", ContainerID:"", Pod:"calico-kube-controllers-86c5674785-bs7n8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.15.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4cfc28f5684", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:19:20.723052 env[1586]: 2025-11-01 00:19:20.677 [INFO][4198] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.2/32] ContainerID="fdc6edca14517e156a46e9cebcaeb939e0078635d422542b59847ce7a0b1569b" Namespace="calico-system" Pod="calico-kube-controllers-86c5674785-bs7n8" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-calico--kube--controllers--86c5674785--bs7n8-eth0" Nov 1 00:19:20.723052 env[1586]: 2025-11-01 00:19:20.677 [INFO][4198] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4cfc28f5684 ContainerID="fdc6edca14517e156a46e9cebcaeb939e0078635d422542b59847ce7a0b1569b" Namespace="calico-system" Pod="calico-kube-controllers-86c5674785-bs7n8" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-calico--kube--controllers--86c5674785--bs7n8-eth0" Nov 1 00:19:20.723052 env[1586]: 2025-11-01 00:19:20.698 [INFO][4198] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fdc6edca14517e156a46e9cebcaeb939e0078635d422542b59847ce7a0b1569b" Namespace="calico-system" Pod="calico-kube-controllers-86c5674785-bs7n8" 
WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-calico--kube--controllers--86c5674785--bs7n8-eth0" Nov 1 00:19:20.723052 env[1586]: 2025-11-01 00:19:20.699 [INFO][4198] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fdc6edca14517e156a46e9cebcaeb939e0078635d422542b59847ce7a0b1569b" Namespace="calico-system" Pod="calico-kube-controllers-86c5674785-bs7n8" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-calico--kube--controllers--86c5674785--bs7n8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--c51a7922c9-k8s-calico--kube--controllers--86c5674785--bs7n8-eth0", GenerateName:"calico-kube-controllers-86c5674785-", Namespace:"calico-system", SelfLink:"", UID:"57cd90f3-35a2-40bb-93fb-693c3ffcd73d", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 18, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86c5674785", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-c51a7922c9", ContainerID:"fdc6edca14517e156a46e9cebcaeb939e0078635d422542b59847ce7a0b1569b", Pod:"calico-kube-controllers-86c5674785-bs7n8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.15.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4cfc28f5684", 
MAC:"66:9a:8a:85:79:f4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:19:20.723052 env[1586]: 2025-11-01 00:19:20.715 [INFO][4198] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fdc6edca14517e156a46e9cebcaeb939e0078635d422542b59847ce7a0b1569b" Namespace="calico-system" Pod="calico-kube-controllers-86c5674785-bs7n8" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-calico--kube--controllers--86c5674785--bs7n8-eth0" Nov 1 00:19:20.735000 audit[4229]: NETFILTER_CFG table=filter:114 family=2 entries=36 op=nft_register_chain pid=4229 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:19:20.735000 audit[4229]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19576 a0=3 a1=ffffe8226e50 a2=0 a3=ffffaecaafa8 items=0 ppid=3943 pid=4229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:20.735000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:19:20.738759 env[1586]: time="2025-11-01T00:19:20.738694717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:19:20.738902 env[1586]: time="2025-11-01T00:19:20.738880117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:19:20.738996 env[1586]: time="2025-11-01T00:19:20.738976157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:19:20.743516 env[1586]: time="2025-11-01T00:19:20.739275836Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fdc6edca14517e156a46e9cebcaeb939e0078635d422542b59847ce7a0b1569b pid=4238 runtime=io.containerd.runc.v2 Nov 1 00:19:20.785017 env[1586]: time="2025-11-01T00:19:20.784981402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86c5674785-bs7n8,Uid:57cd90f3-35a2-40bb-93fb-693c3ffcd73d,Namespace:calico-system,Attempt:1,} returns sandbox id \"fdc6edca14517e156a46e9cebcaeb939e0078635d422542b59847ce7a0b1569b\"" Nov 1 00:19:20.786804 env[1586]: time="2025-11-01T00:19:20.786779121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:19:21.014564 env[1586]: time="2025-11-01T00:19:21.014510830Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:19:21.017569 env[1586]: time="2025-11-01T00:19:21.017507268Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:19:21.017999 kubelet[2677]: E1101 00:19:21.017776 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:19:21.017999 kubelet[2677]: E1101 00:19:21.017821 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:19:21.019172 kubelet[2677]: E1101 00:19:21.019098 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gzr2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-86c5674785-bs7n8_calico-system(57cd90f3-35a2-40bb-93fb-693c3ffcd73d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:19:21.020334 kubelet[2677]: E1101 00:19:21.020267 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-86c5674785-bs7n8" podUID="57cd90f3-35a2-40bb-93fb-693c3ffcd73d" Nov 1 00:19:21.353177 env[1586]: time="2025-11-01T00:19:21.353073499Z" level=info msg="StopPodSandbox for \"a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a\"" Nov 1 00:19:21.354031 env[1586]: time="2025-11-01T00:19:21.353998618Z" level=info msg="StopPodSandbox for \"253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696\"" Nov 1 00:19:21.457386 env[1586]: 2025-11-01 00:19:21.414 [INFO][4298] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" Nov 1 00:19:21.457386 env[1586]: 2025-11-01 00:19:21.415 [INFO][4298] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" iface="eth0" netns="/var/run/netns/cni-dfbaa047-541a-47aa-b3bd-0e15808583cd" Nov 1 00:19:21.457386 env[1586]: 2025-11-01 00:19:21.416 [INFO][4298] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" iface="eth0" netns="/var/run/netns/cni-dfbaa047-541a-47aa-b3bd-0e15808583cd" Nov 1 00:19:21.457386 env[1586]: 2025-11-01 00:19:21.416 [INFO][4298] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" iface="eth0" netns="/var/run/netns/cni-dfbaa047-541a-47aa-b3bd-0e15808583cd" Nov 1 00:19:21.457386 env[1586]: 2025-11-01 00:19:21.416 [INFO][4298] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" Nov 1 00:19:21.457386 env[1586]: 2025-11-01 00:19:21.416 [INFO][4298] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" Nov 1 00:19:21.457386 env[1586]: 2025-11-01 00:19:21.442 [INFO][4307] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" HandleID="k8s-pod-network.253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" Workload="ci--3510.3.8--n--c51a7922c9-k8s-goldmane--666569f655--pw8c5-eth0" Nov 1 00:19:21.457386 env[1586]: 2025-11-01 00:19:21.442 [INFO][4307] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:19:21.457386 env[1586]: 2025-11-01 00:19:21.442 [INFO][4307] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:19:21.457386 env[1586]: 2025-11-01 00:19:21.450 [WARNING][4307] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" HandleID="k8s-pod-network.253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" Workload="ci--3510.3.8--n--c51a7922c9-k8s-goldmane--666569f655--pw8c5-eth0" Nov 1 00:19:21.457386 env[1586]: 2025-11-01 00:19:21.450 [INFO][4307] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" HandleID="k8s-pod-network.253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" Workload="ci--3510.3.8--n--c51a7922c9-k8s-goldmane--666569f655--pw8c5-eth0" Nov 1 00:19:21.457386 env[1586]: 2025-11-01 00:19:21.452 [INFO][4307] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:19:21.457386 env[1586]: 2025-11-01 00:19:21.454 [INFO][4298] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" Nov 1 00:19:21.458510 env[1586]: time="2025-11-01T00:19:21.458469021Z" level=info msg="TearDown network for sandbox \"253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696\" successfully" Nov 1 00:19:21.458589 env[1586]: time="2025-11-01T00:19:21.458534701Z" level=info msg="StopPodSandbox for \"253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696\" returns successfully" Nov 1 00:19:21.459276 env[1586]: time="2025-11-01T00:19:21.459244300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pw8c5,Uid:1e69bd0a-b324-4064-9086-3d6aa0d23b51,Namespace:calico-system,Attempt:1,}" Nov 1 00:19:21.466396 env[1586]: 2025-11-01 00:19:21.421 [INFO][4288] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" Nov 1 00:19:21.466396 env[1586]: 2025-11-01 00:19:21.421 [INFO][4288] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" iface="eth0" netns="/var/run/netns/cni-2df4b3b3-5429-6499-e97d-06581733a76f" Nov 1 00:19:21.466396 env[1586]: 2025-11-01 00:19:21.421 [INFO][4288] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" iface="eth0" netns="/var/run/netns/cni-2df4b3b3-5429-6499-e97d-06581733a76f" Nov 1 00:19:21.466396 env[1586]: 2025-11-01 00:19:21.421 [INFO][4288] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" iface="eth0" netns="/var/run/netns/cni-2df4b3b3-5429-6499-e97d-06581733a76f" Nov 1 00:19:21.466396 env[1586]: 2025-11-01 00:19:21.421 [INFO][4288] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" Nov 1 00:19:21.466396 env[1586]: 2025-11-01 00:19:21.421 [INFO][4288] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" Nov 1 00:19:21.466396 env[1586]: 2025-11-01 00:19:21.448 [INFO][4312] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" HandleID="k8s-pod-network.a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" Workload="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--87vvp-eth0" Nov 1 00:19:21.466396 env[1586]: 2025-11-01 00:19:21.448 [INFO][4312] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:19:21.466396 env[1586]: 2025-11-01 00:19:21.452 [INFO][4312] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:19:21.466396 env[1586]: 2025-11-01 00:19:21.461 [WARNING][4312] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" HandleID="k8s-pod-network.a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" Workload="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--87vvp-eth0" Nov 1 00:19:21.466396 env[1586]: 2025-11-01 00:19:21.461 [INFO][4312] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" HandleID="k8s-pod-network.a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" Workload="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--87vvp-eth0" Nov 1 00:19:21.466396 env[1586]: 2025-11-01 00:19:21.463 [INFO][4312] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:19:21.466396 env[1586]: 2025-11-01 00:19:21.465 [INFO][4288] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" Nov 1 00:19:21.466975 env[1586]: time="2025-11-01T00:19:21.466948815Z" level=info msg="TearDown network for sandbox \"a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a\" successfully" Nov 1 00:19:21.467072 env[1586]: time="2025-11-01T00:19:21.467054255Z" level=info msg="StopPodSandbox for \"a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a\" returns successfully" Nov 1 00:19:21.467797 env[1586]: time="2025-11-01T00:19:21.467774254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-87vvp,Uid:a7634ab8-ff62-48dd-9eee-61be2b01d0bb,Namespace:kube-system,Attempt:1,}" Nov 1 00:19:21.534085 systemd[1]: run-containerd-runc-k8s.io-fdc6edca14517e156a46e9cebcaeb939e0078635d422542b59847ce7a0b1569b-runc.tilKPZ.mount: Deactivated successfully. Nov 1 00:19:21.534216 systemd[1]: run-netns-cni\x2d2df4b3b3\x2d5429\x2d6499\x2de97d\x2d06581733a76f.mount: Deactivated successfully. 
Nov 1 00:19:21.534307 systemd[1]: run-netns-cni\x2ddfbaa047\x2d541a\x2d47aa\x2db3bd\x2d0e15808583cd.mount: Deactivated successfully. Nov 1 00:19:21.626814 kubelet[2677]: E1101 00:19:21.626407 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86c5674785-bs7n8" podUID="57cd90f3-35a2-40bb-93fb-693c3ffcd73d" Nov 1 00:19:21.655090 systemd-networkd[1788]: cali6e7a515feec: Link UP Nov 1 00:19:21.668610 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali6e7a515feec: link becomes ready Nov 1 00:19:21.667848 systemd-networkd[1788]: cali6e7a515feec: Gained carrier Nov 1 00:19:21.681708 env[1586]: 2025-11-01 00:19:21.540 [INFO][4321] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--c51a7922c9-k8s-goldmane--666569f655--pw8c5-eth0 goldmane-666569f655- calico-system 1e69bd0a-b324-4064-9086-3d6aa0d23b51 971 0 2025-11-01 00:18:55 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-3510.3.8-n-c51a7922c9 goldmane-666569f655-pw8c5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6e7a515feec [] [] }} ContainerID="14a702efb8984a3ff9d792daeee5f9f93ad15092784b45615d773a1055f3e82e" Namespace="calico-system" Pod="goldmane-666569f655-pw8c5" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-goldmane--666569f655--pw8c5-" Nov 1 
00:19:21.681708 env[1586]: 2025-11-01 00:19:21.540 [INFO][4321] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="14a702efb8984a3ff9d792daeee5f9f93ad15092784b45615d773a1055f3e82e" Namespace="calico-system" Pod="goldmane-666569f655-pw8c5" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-goldmane--666569f655--pw8c5-eth0" Nov 1 00:19:21.681708 env[1586]: 2025-11-01 00:19:21.592 [INFO][4343] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="14a702efb8984a3ff9d792daeee5f9f93ad15092784b45615d773a1055f3e82e" HandleID="k8s-pod-network.14a702efb8984a3ff9d792daeee5f9f93ad15092784b45615d773a1055f3e82e" Workload="ci--3510.3.8--n--c51a7922c9-k8s-goldmane--666569f655--pw8c5-eth0" Nov 1 00:19:21.681708 env[1586]: 2025-11-01 00:19:21.592 [INFO][4343] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="14a702efb8984a3ff9d792daeee5f9f93ad15092784b45615d773a1055f3e82e" HandleID="k8s-pod-network.14a702efb8984a3ff9d792daeee5f9f93ad15092784b45615d773a1055f3e82e" Workload="ci--3510.3.8--n--c51a7922c9-k8s-goldmane--666569f655--pw8c5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab3a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-c51a7922c9", "pod":"goldmane-666569f655-pw8c5", "timestamp":"2025-11-01 00:19:21.592713721 +0000 UTC"}, Hostname:"ci-3510.3.8-n-c51a7922c9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:19:21.681708 env[1586]: 2025-11-01 00:19:21.593 [INFO][4343] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:19:21.681708 env[1586]: 2025-11-01 00:19:21.593 [INFO][4343] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:19:21.681708 env[1586]: 2025-11-01 00:19:21.593 [INFO][4343] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-c51a7922c9' Nov 1 00:19:21.681708 env[1586]: 2025-11-01 00:19:21.601 [INFO][4343] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.14a702efb8984a3ff9d792daeee5f9f93ad15092784b45615d773a1055f3e82e" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:21.681708 env[1586]: 2025-11-01 00:19:21.605 [INFO][4343] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:21.681708 env[1586]: 2025-11-01 00:19:21.608 [INFO][4343] ipam/ipam.go 511: Trying affinity for 192.168.15.0/26 host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:21.681708 env[1586]: 2025-11-01 00:19:21.610 [INFO][4343] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.0/26 host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:21.681708 env[1586]: 2025-11-01 00:19:21.614 [INFO][4343] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.0/26 host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:21.681708 env[1586]: 2025-11-01 00:19:21.614 [INFO][4343] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.15.0/26 handle="k8s-pod-network.14a702efb8984a3ff9d792daeee5f9f93ad15092784b45615d773a1055f3e82e" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:21.681708 env[1586]: 2025-11-01 00:19:21.616 [INFO][4343] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.14a702efb8984a3ff9d792daeee5f9f93ad15092784b45615d773a1055f3e82e Nov 1 00:19:21.681708 env[1586]: 2025-11-01 00:19:21.621 [INFO][4343] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.15.0/26 handle="k8s-pod-network.14a702efb8984a3ff9d792daeee5f9f93ad15092784b45615d773a1055f3e82e" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:21.681708 env[1586]: 2025-11-01 00:19:21.631 [INFO][4343] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.15.3/26] block=192.168.15.0/26 
handle="k8s-pod-network.14a702efb8984a3ff9d792daeee5f9f93ad15092784b45615d773a1055f3e82e" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:21.681708 env[1586]: 2025-11-01 00:19:21.631 [INFO][4343] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.3/26] handle="k8s-pod-network.14a702efb8984a3ff9d792daeee5f9f93ad15092784b45615d773a1055f3e82e" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:21.681708 env[1586]: 2025-11-01 00:19:21.631 [INFO][4343] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:19:21.681708 env[1586]: 2025-11-01 00:19:21.631 [INFO][4343] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.15.3/26] IPv6=[] ContainerID="14a702efb8984a3ff9d792daeee5f9f93ad15092784b45615d773a1055f3e82e" HandleID="k8s-pod-network.14a702efb8984a3ff9d792daeee5f9f93ad15092784b45615d773a1055f3e82e" Workload="ci--3510.3.8--n--c51a7922c9-k8s-goldmane--666569f655--pw8c5-eth0" Nov 1 00:19:21.682407 env[1586]: 2025-11-01 00:19:21.638 [INFO][4321] cni-plugin/k8s.go 418: Populated endpoint ContainerID="14a702efb8984a3ff9d792daeee5f9f93ad15092784b45615d773a1055f3e82e" Namespace="calico-system" Pod="goldmane-666569f655-pw8c5" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-goldmane--666569f655--pw8c5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--c51a7922c9-k8s-goldmane--666569f655--pw8c5-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"1e69bd0a-b324-4064-9086-3d6aa0d23b51", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 18, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-c51a7922c9", ContainerID:"", Pod:"goldmane-666569f655-pw8c5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.15.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6e7a515feec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:19:21.682407 env[1586]: 2025-11-01 00:19:21.638 [INFO][4321] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.3/32] ContainerID="14a702efb8984a3ff9d792daeee5f9f93ad15092784b45615d773a1055f3e82e" Namespace="calico-system" Pod="goldmane-666569f655-pw8c5" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-goldmane--666569f655--pw8c5-eth0" Nov 1 00:19:21.682407 env[1586]: 2025-11-01 00:19:21.638 [INFO][4321] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6e7a515feec ContainerID="14a702efb8984a3ff9d792daeee5f9f93ad15092784b45615d773a1055f3e82e" Namespace="calico-system" Pod="goldmane-666569f655-pw8c5" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-goldmane--666569f655--pw8c5-eth0" Nov 1 00:19:21.682407 env[1586]: 2025-11-01 00:19:21.655 [INFO][4321] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="14a702efb8984a3ff9d792daeee5f9f93ad15092784b45615d773a1055f3e82e" Namespace="calico-system" Pod="goldmane-666569f655-pw8c5" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-goldmane--666569f655--pw8c5-eth0" Nov 1 00:19:21.682407 env[1586]: 2025-11-01 00:19:21.655 [INFO][4321] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="14a702efb8984a3ff9d792daeee5f9f93ad15092784b45615d773a1055f3e82e" Namespace="calico-system" Pod="goldmane-666569f655-pw8c5" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-goldmane--666569f655--pw8c5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--c51a7922c9-k8s-goldmane--666569f655--pw8c5-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"1e69bd0a-b324-4064-9086-3d6aa0d23b51", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 18, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-c51a7922c9", ContainerID:"14a702efb8984a3ff9d792daeee5f9f93ad15092784b45615d773a1055f3e82e", Pod:"goldmane-666569f655-pw8c5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.15.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6e7a515feec", MAC:"72:18:92:20:33:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:19:21.682407 env[1586]: 2025-11-01 00:19:21.677 [INFO][4321] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="14a702efb8984a3ff9d792daeee5f9f93ad15092784b45615d773a1055f3e82e" Namespace="calico-system" Pod="goldmane-666569f655-pw8c5" 
WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-goldmane--666569f655--pw8c5-eth0" Nov 1 00:19:21.692768 env[1586]: time="2025-11-01T00:19:21.692693167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:19:21.692982 env[1586]: time="2025-11-01T00:19:21.692957607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:19:21.693254 env[1586]: time="2025-11-01T00:19:21.693130327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:19:21.693347 env[1586]: time="2025-11-01T00:19:21.693313607Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/14a702efb8984a3ff9d792daeee5f9f93ad15092784b45615d773a1055f3e82e pid=4376 runtime=io.containerd.runc.v2 Nov 1 00:19:21.704000 audit[4387]: NETFILTER_CFG table=filter:115 family=2 entries=48 op=nft_register_chain pid=4387 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:19:21.711103 kernel: kauditd_printk_skb: 562 callbacks suppressed Nov 1 00:19:21.711224 kernel: audit: type=1325 audit(1761956361.704:428): table=filter:115 family=2 entries=48 op=nft_register_chain pid=4387 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:19:21.704000 audit[4387]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26368 a0=3 a1=ffffc38a7200 a2=0 a3=ffffa43f5fa8 items=0 ppid=3943 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:21.734270 systemd[1]: run-containerd-runc-k8s.io-14a702efb8984a3ff9d792daeee5f9f93ad15092784b45615d773a1055f3e82e-runc.TJlWJs.mount: Deactivated successfully. 
Nov 1 00:19:21.766507 kernel: audit: type=1300 audit(1761956361.704:428): arch=c00000b7 syscall=211 success=yes exit=26368 a0=3 a1=ffffc38a7200 a2=0 a3=ffffa43f5fa8 items=0 ppid=3943 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:21.704000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:19:21.783112 kernel: audit: type=1327 audit(1761956361.704:428): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:19:21.827883 env[1586]: time="2025-11-01T00:19:21.827710107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-pw8c5,Uid:1e69bd0a-b324-4064-9086-3d6aa0d23b51,Namespace:calico-system,Attempt:1,} returns sandbox id \"14a702efb8984a3ff9d792daeee5f9f93ad15092784b45615d773a1055f3e82e\"" Nov 1 00:19:21.830278 systemd-networkd[1788]: cali22593b6a12a: Link UP Nov 1 00:19:21.845266 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 00:19:21.845314 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali22593b6a12a: link becomes ready Nov 1 00:19:21.845342 env[1586]: time="2025-11-01T00:19:21.838168699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:19:21.850831 systemd-networkd[1788]: cali22593b6a12a: Gained carrier Nov 1 00:19:21.870377 env[1586]: 2025-11-01 00:19:21.569 [INFO][4330] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--87vvp-eth0 coredns-668d6bf9bc- kube-system a7634ab8-ff62-48dd-9eee-61be2b01d0bb 972 0 2025-11-01 00:18:36 +0000 UTC map[k8s-app:kube-dns 
pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.8-n-c51a7922c9 coredns-668d6bf9bc-87vvp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali22593b6a12a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6761a414258af3602bf4d4679b331667547f2ae843a5934422d35cce692a9da7" Namespace="kube-system" Pod="coredns-668d6bf9bc-87vvp" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--87vvp-" Nov 1 00:19:21.870377 env[1586]: 2025-11-01 00:19:21.570 [INFO][4330] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6761a414258af3602bf4d4679b331667547f2ae843a5934422d35cce692a9da7" Namespace="kube-system" Pod="coredns-668d6bf9bc-87vvp" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--87vvp-eth0" Nov 1 00:19:21.870377 env[1586]: 2025-11-01 00:19:21.611 [INFO][4349] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6761a414258af3602bf4d4679b331667547f2ae843a5934422d35cce692a9da7" HandleID="k8s-pod-network.6761a414258af3602bf4d4679b331667547f2ae843a5934422d35cce692a9da7" Workload="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--87vvp-eth0" Nov 1 00:19:21.870377 env[1586]: 2025-11-01 00:19:21.611 [INFO][4349] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6761a414258af3602bf4d4679b331667547f2ae843a5934422d35cce692a9da7" HandleID="k8s-pod-network.6761a414258af3602bf4d4679b331667547f2ae843a5934422d35cce692a9da7" Workload="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--87vvp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003233a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.8-n-c51a7922c9", "pod":"coredns-668d6bf9bc-87vvp", "timestamp":"2025-11-01 00:19:21.611209268 +0000 UTC"}, Hostname:"ci-3510.3.8-n-c51a7922c9", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:19:21.870377 env[1586]: 2025-11-01 00:19:21.611 [INFO][4349] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:19:21.870377 env[1586]: 2025-11-01 00:19:21.632 [INFO][4349] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:19:21.870377 env[1586]: 2025-11-01 00:19:21.632 [INFO][4349] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-c51a7922c9' Nov 1 00:19:21.870377 env[1586]: 2025-11-01 00:19:21.702 [INFO][4349] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6761a414258af3602bf4d4679b331667547f2ae843a5934422d35cce692a9da7" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:21.870377 env[1586]: 2025-11-01 00:19:21.716 [INFO][4349] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:21.870377 env[1586]: 2025-11-01 00:19:21.737 [INFO][4349] ipam/ipam.go 511: Trying affinity for 192.168.15.0/26 host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:21.870377 env[1586]: 2025-11-01 00:19:21.767 [INFO][4349] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.0/26 host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:21.870377 env[1586]: 2025-11-01 00:19:21.793 [INFO][4349] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.0/26 host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:21.870377 env[1586]: 2025-11-01 00:19:21.793 [INFO][4349] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.15.0/26 handle="k8s-pod-network.6761a414258af3602bf4d4679b331667547f2ae843a5934422d35cce692a9da7" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:21.870377 env[1586]: 2025-11-01 00:19:21.795 [INFO][4349] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6761a414258af3602bf4d4679b331667547f2ae843a5934422d35cce692a9da7 Nov 1 
00:19:21.870377 env[1586]: 2025-11-01 00:19:21.808 [INFO][4349] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.15.0/26 handle="k8s-pod-network.6761a414258af3602bf4d4679b331667547f2ae843a5934422d35cce692a9da7" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:21.870377 env[1586]: 2025-11-01 00:19:21.823 [INFO][4349] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.15.4/26] block=192.168.15.0/26 handle="k8s-pod-network.6761a414258af3602bf4d4679b331667547f2ae843a5934422d35cce692a9da7" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:21.870377 env[1586]: 2025-11-01 00:19:21.823 [INFO][4349] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.4/26] handle="k8s-pod-network.6761a414258af3602bf4d4679b331667547f2ae843a5934422d35cce692a9da7" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:21.870377 env[1586]: 2025-11-01 00:19:21.823 [INFO][4349] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:19:21.870377 env[1586]: 2025-11-01 00:19:21.823 [INFO][4349] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.15.4/26] IPv6=[] ContainerID="6761a414258af3602bf4d4679b331667547f2ae843a5934422d35cce692a9da7" HandleID="k8s-pod-network.6761a414258af3602bf4d4679b331667547f2ae843a5934422d35cce692a9da7" Workload="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--87vvp-eth0" Nov 1 00:19:21.873096 env[1586]: 2025-11-01 00:19:21.825 [INFO][4330] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6761a414258af3602bf4d4679b331667547f2ae843a5934422d35cce692a9da7" Namespace="kube-system" Pod="coredns-668d6bf9bc-87vvp" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--87vvp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--87vvp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a7634ab8-ff62-48dd-9eee-61be2b01d0bb", 
ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 18, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-c51a7922c9", ContainerID:"", Pod:"coredns-668d6bf9bc-87vvp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali22593b6a12a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:19:21.873096 env[1586]: 2025-11-01 00:19:21.825 [INFO][4330] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.4/32] ContainerID="6761a414258af3602bf4d4679b331667547f2ae843a5934422d35cce692a9da7" Namespace="kube-system" Pod="coredns-668d6bf9bc-87vvp" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--87vvp-eth0" Nov 1 00:19:21.873096 env[1586]: 2025-11-01 00:19:21.825 [INFO][4330] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22593b6a12a 
ContainerID="6761a414258af3602bf4d4679b331667547f2ae843a5934422d35cce692a9da7" Namespace="kube-system" Pod="coredns-668d6bf9bc-87vvp" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--87vvp-eth0" Nov 1 00:19:21.873096 env[1586]: 2025-11-01 00:19:21.853 [INFO][4330] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6761a414258af3602bf4d4679b331667547f2ae843a5934422d35cce692a9da7" Namespace="kube-system" Pod="coredns-668d6bf9bc-87vvp" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--87vvp-eth0" Nov 1 00:19:21.873096 env[1586]: 2025-11-01 00:19:21.853 [INFO][4330] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6761a414258af3602bf4d4679b331667547f2ae843a5934422d35cce692a9da7" Namespace="kube-system" Pod="coredns-668d6bf9bc-87vvp" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--87vvp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--87vvp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a7634ab8-ff62-48dd-9eee-61be2b01d0bb", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 18, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-c51a7922c9", ContainerID:"6761a414258af3602bf4d4679b331667547f2ae843a5934422d35cce692a9da7", 
Pod:"coredns-668d6bf9bc-87vvp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali22593b6a12a", MAC:"f6:bf:a1:60:f8:d2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:19:21.873096 env[1586]: 2025-11-01 00:19:21.867 [INFO][4330] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6761a414258af3602bf4d4679b331667547f2ae843a5934422d35cce692a9da7" Namespace="kube-system" Pod="coredns-668d6bf9bc-87vvp" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--87vvp-eth0" Nov 1 00:19:21.880000 audit[4422]: NETFILTER_CFG table=filter:116 family=2 entries=50 op=nft_register_chain pid=4422 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:19:21.880000 audit[4422]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24928 a0=3 a1=fffff15c8d80 a2=0 a3=ffffa644afa8 items=0 ppid=3943 pid=4422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:21.924801 kernel: audit: type=1325 audit(1761956361.880:429): table=filter:116 family=2 entries=50 op=nft_register_chain pid=4422 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:19:21.924977 kernel: audit: type=1300 audit(1761956361.880:429): 
arch=c00000b7 syscall=211 success=yes exit=24928 a0=3 a1=fffff15c8d80 a2=0 a3=ffffa644afa8 items=0 ppid=3943 pid=4422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:21.880000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:19:21.925506 systemd-networkd[1788]: cali4cfc28f5684: Gained IPv6LL Nov 1 00:19:21.941338 kernel: audit: type=1327 audit(1761956361.880:429): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:19:21.943797 env[1586]: time="2025-11-01T00:19:21.943057382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:19:21.943797 env[1586]: time="2025-11-01T00:19:21.943127342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:19:21.943797 env[1586]: time="2025-11-01T00:19:21.943152462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:19:21.943797 env[1586]: time="2025-11-01T00:19:21.943548541Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6761a414258af3602bf4d4679b331667547f2ae843a5934422d35cce692a9da7 pid=4431 runtime=io.containerd.runc.v2 Nov 1 00:19:21.992036 env[1586]: time="2025-11-01T00:19:21.991991385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-87vvp,Uid:a7634ab8-ff62-48dd-9eee-61be2b01d0bb,Namespace:kube-system,Attempt:1,} returns sandbox id \"6761a414258af3602bf4d4679b331667547f2ae843a5934422d35cce692a9da7\"" Nov 1 00:19:21.996338 env[1586]: time="2025-11-01T00:19:21.996250782Z" level=info msg="CreateContainer within sandbox \"6761a414258af3602bf4d4679b331667547f2ae843a5934422d35cce692a9da7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:19:22.035777 env[1586]: time="2025-11-01T00:19:22.035733833Z" level=info msg="CreateContainer within sandbox \"6761a414258af3602bf4d4679b331667547f2ae843a5934422d35cce692a9da7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fd7d8ee343d207c8ec46a9bee39848094600493d8b4c18be48706401c1d544b2\"" Nov 1 00:19:22.036965 env[1586]: time="2025-11-01T00:19:22.036614833Z" level=info msg="StartContainer for \"fd7d8ee343d207c8ec46a9bee39848094600493d8b4c18be48706401c1d544b2\"" Nov 1 00:19:22.099511 env[1586]: time="2025-11-01T00:19:22.099461706Z" level=info msg="StartContainer for \"fd7d8ee343d207c8ec46a9bee39848094600493d8b4c18be48706401c1d544b2\" returns successfully" Nov 1 00:19:22.104326 env[1586]: time="2025-11-01T00:19:22.103993103Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:19:22.107227 env[1586]: time="2025-11-01T00:19:22.107180381Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:19:22.108353 kubelet[2677]: E1101 00:19:22.108078 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:19:22.108353 kubelet[2677]: E1101 00:19:22.108139 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:19:22.108353 kubelet[2677]: E1101 00:19:22.108279 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x7dtq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pw8c5_calico-system(1e69bd0a-b324-4064-9086-3d6aa0d23b51): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:19:22.109697 kubelet[2677]: E1101 00:19:22.109655 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pw8c5" podUID="1e69bd0a-b324-4064-9086-3d6aa0d23b51" Nov 1 00:19:22.353837 env[1586]: time="2025-11-01T00:19:22.353799200Z" level=info msg="StopPodSandbox for \"f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24\"" Nov 1 00:19:22.451767 env[1586]: 2025-11-01 00:19:22.409 [INFO][4512] cni-plugin/k8s.go 640: Cleaning 
up netns ContainerID="f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" Nov 1 00:19:22.451767 env[1586]: 2025-11-01 00:19:22.409 [INFO][4512] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" iface="eth0" netns="/var/run/netns/cni-f03d7847-cc97-75f5-465b-4d211ddf07ae" Nov 1 00:19:22.451767 env[1586]: 2025-11-01 00:19:22.409 [INFO][4512] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" iface="eth0" netns="/var/run/netns/cni-f03d7847-cc97-75f5-465b-4d211ddf07ae" Nov 1 00:19:22.451767 env[1586]: 2025-11-01 00:19:22.409 [INFO][4512] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" iface="eth0" netns="/var/run/netns/cni-f03d7847-cc97-75f5-465b-4d211ddf07ae" Nov 1 00:19:22.451767 env[1586]: 2025-11-01 00:19:22.409 [INFO][4512] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" Nov 1 00:19:22.451767 env[1586]: 2025-11-01 00:19:22.409 [INFO][4512] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" Nov 1 00:19:22.451767 env[1586]: 2025-11-01 00:19:22.440 [INFO][4522] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" HandleID="k8s-pod-network.f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--qwkg7-eth0" Nov 1 00:19:22.451767 env[1586]: 2025-11-01 00:19:22.440 [INFO][4522] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 00:19:22.451767 env[1586]: 2025-11-01 00:19:22.440 [INFO][4522] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:19:22.451767 env[1586]: 2025-11-01 00:19:22.447 [WARNING][4522] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" HandleID="k8s-pod-network.f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--qwkg7-eth0" Nov 1 00:19:22.451767 env[1586]: 2025-11-01 00:19:22.447 [INFO][4522] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" HandleID="k8s-pod-network.f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--qwkg7-eth0" Nov 1 00:19:22.451767 env[1586]: 2025-11-01 00:19:22.448 [INFO][4522] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:19:22.451767 env[1586]: 2025-11-01 00:19:22.450 [INFO][4512] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" Nov 1 00:19:22.452209 env[1586]: time="2025-11-01T00:19:22.451905648Z" level=info msg="TearDown network for sandbox \"f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24\" successfully" Nov 1 00:19:22.452209 env[1586]: time="2025-11-01T00:19:22.451943328Z" level=info msg="StopPodSandbox for \"f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24\" returns successfully" Nov 1 00:19:22.452545 env[1586]: time="2025-11-01T00:19:22.452513528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c8dcbbd64-qwkg7,Uid:da0e9dac-d5af-4669-8132-3ec847bb81ba,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:19:22.534215 systemd[1]: run-netns-cni\x2df03d7847\x2dcc97\x2d75f5\x2d465b\x2d4d211ddf07ae.mount: Deactivated successfully. Nov 1 00:19:22.605024 systemd-networkd[1788]: caliada09114b1a: Link UP Nov 1 00:19:22.614028 systemd-networkd[1788]: caliada09114b1a: Gained carrier Nov 1 00:19:22.614310 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliada09114b1a: link becomes ready Nov 1 00:19:22.637740 kubelet[2677]: E1101 00:19:22.637655 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86c5674785-bs7n8" podUID="57cd90f3-35a2-40bb-93fb-693c3ffcd73d" Nov 1 00:19:22.639065 kubelet[2677]: E1101 00:19:22.639027 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pw8c5" podUID="1e69bd0a-b324-4064-9086-3d6aa0d23b51" Nov 1 00:19:22.645996 kubelet[2677]: I1101 00:19:22.645948 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-87vvp" podStartSLOduration=46.645936786 podStartE2EDuration="46.645936786s" podCreationTimestamp="2025-11-01 00:18:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:19:22.645651266 +0000 UTC m=+52.455968340" watchObservedRunningTime="2025-11-01 00:19:22.645936786 +0000 UTC m=+52.456253860" Nov 1 00:19:22.660930 env[1586]: 2025-11-01 00:19:22.521 [INFO][4528] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--qwkg7-eth0 calico-apiserver-6c8dcbbd64- calico-apiserver da0e9dac-d5af-4669-8132-3ec847bb81ba 992 0 2025-11-01 00:18:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c8dcbbd64 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-n-c51a7922c9 calico-apiserver-6c8dcbbd64-qwkg7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliada09114b1a [] [] }} ContainerID="3d12e0cf8e497455f6a4f9278d47f0d78754292b5d61bf288d2aecadb11ebcba" Namespace="calico-apiserver" Pod="calico-apiserver-6c8dcbbd64-qwkg7" 
WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--qwkg7-" Nov 1 00:19:22.660930 env[1586]: 2025-11-01 00:19:22.521 [INFO][4528] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3d12e0cf8e497455f6a4f9278d47f0d78754292b5d61bf288d2aecadb11ebcba" Namespace="calico-apiserver" Pod="calico-apiserver-6c8dcbbd64-qwkg7" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--qwkg7-eth0" Nov 1 00:19:22.660930 env[1586]: 2025-11-01 00:19:22.553 [INFO][4540] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3d12e0cf8e497455f6a4f9278d47f0d78754292b5d61bf288d2aecadb11ebcba" HandleID="k8s-pod-network.3d12e0cf8e497455f6a4f9278d47f0d78754292b5d61bf288d2aecadb11ebcba" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--qwkg7-eth0" Nov 1 00:19:22.660930 env[1586]: 2025-11-01 00:19:22.553 [INFO][4540] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3d12e0cf8e497455f6a4f9278d47f0d78754292b5d61bf288d2aecadb11ebcba" HandleID="k8s-pod-network.3d12e0cf8e497455f6a4f9278d47f0d78754292b5d61bf288d2aecadb11ebcba" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--qwkg7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b5d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-n-c51a7922c9", "pod":"calico-apiserver-6c8dcbbd64-qwkg7", "timestamp":"2025-11-01 00:19:22.553192374 +0000 UTC"}, Hostname:"ci-3510.3.8-n-c51a7922c9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:19:22.660930 env[1586]: 2025-11-01 00:19:22.553 [INFO][4540] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:19:22.660930 env[1586]: 2025-11-01 00:19:22.553 [INFO][4540] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:19:22.660930 env[1586]: 2025-11-01 00:19:22.553 [INFO][4540] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-c51a7922c9' Nov 1 00:19:22.660930 env[1586]: 2025-11-01 00:19:22.562 [INFO][4540] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3d12e0cf8e497455f6a4f9278d47f0d78754292b5d61bf288d2aecadb11ebcba" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:22.660930 env[1586]: 2025-11-01 00:19:22.565 [INFO][4540] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:22.660930 env[1586]: 2025-11-01 00:19:22.570 [INFO][4540] ipam/ipam.go 511: Trying affinity for 192.168.15.0/26 host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:22.660930 env[1586]: 2025-11-01 00:19:22.572 [INFO][4540] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.0/26 host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:22.660930 env[1586]: 2025-11-01 00:19:22.573 [INFO][4540] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.0/26 host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:22.660930 env[1586]: 2025-11-01 00:19:22.573 [INFO][4540] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.15.0/26 handle="k8s-pod-network.3d12e0cf8e497455f6a4f9278d47f0d78754292b5d61bf288d2aecadb11ebcba" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:22.660930 env[1586]: 2025-11-01 00:19:22.575 [INFO][4540] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3d12e0cf8e497455f6a4f9278d47f0d78754292b5d61bf288d2aecadb11ebcba Nov 1 00:19:22.660930 env[1586]: 2025-11-01 00:19:22.587 [INFO][4540] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.15.0/26 handle="k8s-pod-network.3d12e0cf8e497455f6a4f9278d47f0d78754292b5d61bf288d2aecadb11ebcba" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:22.660930 env[1586]: 2025-11-01 00:19:22.598 [INFO][4540] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.15.5/26] block=192.168.15.0/26 
handle="k8s-pod-network.3d12e0cf8e497455f6a4f9278d47f0d78754292b5d61bf288d2aecadb11ebcba" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:22.660930 env[1586]: 2025-11-01 00:19:22.598 [INFO][4540] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.5/26] handle="k8s-pod-network.3d12e0cf8e497455f6a4f9278d47f0d78754292b5d61bf288d2aecadb11ebcba" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:22.660930 env[1586]: 2025-11-01 00:19:22.598 [INFO][4540] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:19:22.660930 env[1586]: 2025-11-01 00:19:22.598 [INFO][4540] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.15.5/26] IPv6=[] ContainerID="3d12e0cf8e497455f6a4f9278d47f0d78754292b5d61bf288d2aecadb11ebcba" HandleID="k8s-pod-network.3d12e0cf8e497455f6a4f9278d47f0d78754292b5d61bf288d2aecadb11ebcba" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--qwkg7-eth0" Nov 1 00:19:22.661762 env[1586]: 2025-11-01 00:19:22.600 [INFO][4528] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3d12e0cf8e497455f6a4f9278d47f0d78754292b5d61bf288d2aecadb11ebcba" Namespace="calico-apiserver" Pod="calico-apiserver-6c8dcbbd64-qwkg7" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--qwkg7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--qwkg7-eth0", GenerateName:"calico-apiserver-6c8dcbbd64-", Namespace:"calico-apiserver", SelfLink:"", UID:"da0e9dac-d5af-4669-8132-3ec847bb81ba", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 18, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c8dcbbd64", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-c51a7922c9", ContainerID:"", Pod:"calico-apiserver-6c8dcbbd64-qwkg7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliada09114b1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:19:22.661762 env[1586]: 2025-11-01 00:19:22.600 [INFO][4528] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.5/32] ContainerID="3d12e0cf8e497455f6a4f9278d47f0d78754292b5d61bf288d2aecadb11ebcba" Namespace="calico-apiserver" Pod="calico-apiserver-6c8dcbbd64-qwkg7" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--qwkg7-eth0" Nov 1 00:19:22.661762 env[1586]: 2025-11-01 00:19:22.600 [INFO][4528] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliada09114b1a ContainerID="3d12e0cf8e497455f6a4f9278d47f0d78754292b5d61bf288d2aecadb11ebcba" Namespace="calico-apiserver" Pod="calico-apiserver-6c8dcbbd64-qwkg7" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--qwkg7-eth0" Nov 1 00:19:22.661762 env[1586]: 2025-11-01 00:19:22.614 [INFO][4528] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3d12e0cf8e497455f6a4f9278d47f0d78754292b5d61bf288d2aecadb11ebcba" Namespace="calico-apiserver" Pod="calico-apiserver-6c8dcbbd64-qwkg7" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--qwkg7-eth0" Nov 1 
00:19:22.661762 env[1586]: 2025-11-01 00:19:22.615 [INFO][4528] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3d12e0cf8e497455f6a4f9278d47f0d78754292b5d61bf288d2aecadb11ebcba" Namespace="calico-apiserver" Pod="calico-apiserver-6c8dcbbd64-qwkg7" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--qwkg7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--qwkg7-eth0", GenerateName:"calico-apiserver-6c8dcbbd64-", Namespace:"calico-apiserver", SelfLink:"", UID:"da0e9dac-d5af-4669-8132-3ec847bb81ba", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 18, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c8dcbbd64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-c51a7922c9", ContainerID:"3d12e0cf8e497455f6a4f9278d47f0d78754292b5d61bf288d2aecadb11ebcba", Pod:"calico-apiserver-6c8dcbbd64-qwkg7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliada09114b1a", MAC:"fa:04:8a:5b:f4:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 
1 00:19:22.661762 env[1586]: 2025-11-01 00:19:22.651 [INFO][4528] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3d12e0cf8e497455f6a4f9278d47f0d78754292b5d61bf288d2aecadb11ebcba" Namespace="calico-apiserver" Pod="calico-apiserver-6c8dcbbd64-qwkg7" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--qwkg7-eth0" Nov 1 00:19:22.675000 audit[4556]: NETFILTER_CFG table=filter:117 family=2 entries=62 op=nft_register_chain pid=4556 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:19:22.675000 audit[4556]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=31772 a0=3 a1=ffffd6678830 a2=0 a3=ffffb0c4dfa8 items=0 ppid=3943 pid=4556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:22.721107 kernel: audit: type=1325 audit(1761956362.675:430): table=filter:117 family=2 entries=62 op=nft_register_chain pid=4556 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:19:22.726379 kernel: audit: type=1300 audit(1761956362.675:430): arch=c00000b7 syscall=211 success=yes exit=31772 a0=3 a1=ffffd6678830 a2=0 a3=ffffb0c4dfa8 items=0 ppid=3943 pid=4556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:22.726462 kernel: audit: type=1327 audit(1761956362.675:430): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:19:22.675000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:19:22.726560 env[1586]: time="2025-11-01T00:19:22.705062743Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:19:22.726560 env[1586]: time="2025-11-01T00:19:22.705100823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:19:22.726560 env[1586]: time="2025-11-01T00:19:22.705110663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:19:22.726560 env[1586]: time="2025-11-01T00:19:22.705235983Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3d12e0cf8e497455f6a4f9278d47f0d78754292b5d61bf288d2aecadb11ebcba pid=4566 runtime=io.containerd.runc.v2 Nov 1 00:19:22.724000 audit[4557]: NETFILTER_CFG table=filter:118 family=2 entries=20 op=nft_register_rule pid=4557 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:19:22.758176 kernel: audit: type=1325 audit(1761956362.724:431): table=filter:118 family=2 entries=20 op=nft_register_rule pid=4557 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:19:22.724000 audit[4557]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffefe2b0f0 a2=0 a3=1 items=0 ppid=2782 pid=4557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:22.724000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:19:22.767000 audit[4557]: NETFILTER_CFG table=nat:119 family=2 entries=14 op=nft_register_rule pid=4557 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:19:22.767000 audit[4557]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffefe2b0f0 a2=0 a3=1 items=0 
ppid=2782 pid=4557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:22.767000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:19:22.786562 systemd[1]: run-containerd-runc-k8s.io-3d12e0cf8e497455f6a4f9278d47f0d78754292b5d61bf288d2aecadb11ebcba-runc.fn6hgE.mount: Deactivated successfully. Nov 1 00:19:22.795000 audit[4594]: NETFILTER_CFG table=filter:120 family=2 entries=17 op=nft_register_rule pid=4594 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:19:22.795000 audit[4594]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffdc9aa5b0 a2=0 a3=1 items=0 ppid=2782 pid=4594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:22.795000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:19:22.797503 systemd-networkd[1788]: cali6e7a515feec: Gained IPv6LL Nov 1 00:19:22.801000 audit[4594]: NETFILTER_CFG table=nat:121 family=2 entries=35 op=nft_register_chain pid=4594 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:19:22.801000 audit[4594]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffdc9aa5b0 a2=0 a3=1 items=0 ppid=2782 pid=4594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:22.801000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:19:22.821891 
env[1586]: time="2025-11-01T00:19:22.821838337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c8dcbbd64-qwkg7,Uid:da0e9dac-d5af-4669-8132-3ec847bb81ba,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3d12e0cf8e497455f6a4f9278d47f0d78754292b5d61bf288d2aecadb11ebcba\"" Nov 1 00:19:22.823692 env[1586]: time="2025-11-01T00:19:22.823652056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:19:23.071351 env[1586]: time="2025-11-01T00:19:23.071273995Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:19:23.077031 env[1586]: time="2025-11-01T00:19:23.076984071Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:19:23.077765 kubelet[2677]: E1101 00:19:23.077268 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:19:23.077765 kubelet[2677]: E1101 00:19:23.077342 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:19:23.077765 kubelet[2677]: E1101 00:19:23.077697 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bfrqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c8dcbbd64-qwkg7_calico-apiserver(da0e9dac-d5af-4669-8132-3ec847bb81ba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:19:23.078940 kubelet[2677]: E1101 00:19:23.078867 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-qwkg7" podUID="da0e9dac-d5af-4669-8132-3ec847bb81ba" Nov 1 00:19:23.117435 systemd-networkd[1788]: cali22593b6a12a: Gained IPv6LL Nov 1 00:19:23.353157 env[1586]: time="2025-11-01T00:19:23.353030071Z" level=info msg="StopPodSandbox for 
\"28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857\"" Nov 1 00:19:23.353690 env[1586]: time="2025-11-01T00:19:23.353666350Z" level=info msg="StopPodSandbox for \"901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15\"" Nov 1 00:19:23.354160 env[1586]: time="2025-11-01T00:19:23.354138030Z" level=info msg="StopPodSandbox for \"2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455\"" Nov 1 00:19:23.485442 env[1586]: 2025-11-01 00:19:23.430 [INFO][4629] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" Nov 1 00:19:23.485442 env[1586]: 2025-11-01 00:19:23.430 [INFO][4629] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" iface="eth0" netns="/var/run/netns/cni-7caac20e-6160-b6b0-1e46-31b0f0c45a26" Nov 1 00:19:23.485442 env[1586]: 2025-11-01 00:19:23.430 [INFO][4629] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" iface="eth0" netns="/var/run/netns/cni-7caac20e-6160-b6b0-1e46-31b0f0c45a26" Nov 1 00:19:23.485442 env[1586]: 2025-11-01 00:19:23.430 [INFO][4629] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" iface="eth0" netns="/var/run/netns/cni-7caac20e-6160-b6b0-1e46-31b0f0c45a26" Nov 1 00:19:23.485442 env[1586]: 2025-11-01 00:19:23.430 [INFO][4629] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" Nov 1 00:19:23.485442 env[1586]: 2025-11-01 00:19:23.430 [INFO][4629] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" Nov 1 00:19:23.485442 env[1586]: 2025-11-01 00:19:23.466 [INFO][4654] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" HandleID="k8s-pod-network.2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" Workload="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--q8677-eth0" Nov 1 00:19:23.485442 env[1586]: 2025-11-01 00:19:23.470 [INFO][4654] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:19:23.485442 env[1586]: 2025-11-01 00:19:23.471 [INFO][4654] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:19:23.485442 env[1586]: 2025-11-01 00:19:23.480 [WARNING][4654] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" HandleID="k8s-pod-network.2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" Workload="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--q8677-eth0" Nov 1 00:19:23.485442 env[1586]: 2025-11-01 00:19:23.480 [INFO][4654] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" HandleID="k8s-pod-network.2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" Workload="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--q8677-eth0" Nov 1 00:19:23.485442 env[1586]: 2025-11-01 00:19:23.481 [INFO][4654] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:19:23.485442 env[1586]: 2025-11-01 00:19:23.484 [INFO][4629] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" Nov 1 00:19:23.486100 env[1586]: time="2025-11-01T00:19:23.486066174Z" level=info msg="TearDown network for sandbox \"2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455\" successfully" Nov 1 00:19:23.486186 env[1586]: time="2025-11-01T00:19:23.486169734Z" level=info msg="StopPodSandbox for \"2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455\" returns successfully" Nov 1 00:19:23.488711 systemd[1]: run-netns-cni\x2d7caac20e\x2d6160\x2db6b0\x2d1e46\x2d31b0f0c45a26.mount: Deactivated successfully. 
Nov 1 00:19:23.490513 env[1586]: time="2025-11-01T00:19:23.490474091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q8677,Uid:a4048da5-d286-44c8-9ec0-180e591b9eec,Namespace:kube-system,Attempt:1,}" Nov 1 00:19:23.509131 env[1586]: 2025-11-01 00:19:23.436 [INFO][4640] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" Nov 1 00:19:23.509131 env[1586]: 2025-11-01 00:19:23.436 [INFO][4640] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" iface="eth0" netns="/var/run/netns/cni-45d59e6d-92cc-8fc9-4bdd-cca1936ccd7c" Nov 1 00:19:23.509131 env[1586]: 2025-11-01 00:19:23.437 [INFO][4640] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" iface="eth0" netns="/var/run/netns/cni-45d59e6d-92cc-8fc9-4bdd-cca1936ccd7c" Nov 1 00:19:23.509131 env[1586]: 2025-11-01 00:19:23.439 [INFO][4640] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" iface="eth0" netns="/var/run/netns/cni-45d59e6d-92cc-8fc9-4bdd-cca1936ccd7c" Nov 1 00:19:23.509131 env[1586]: 2025-11-01 00:19:23.439 [INFO][4640] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" Nov 1 00:19:23.509131 env[1586]: 2025-11-01 00:19:23.439 [INFO][4640] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" Nov 1 00:19:23.509131 env[1586]: 2025-11-01 00:19:23.493 [INFO][4660] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" HandleID="k8s-pod-network.901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--p85vf-eth0" Nov 1 00:19:23.509131 env[1586]: 2025-11-01 00:19:23.493 [INFO][4660] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:19:23.509131 env[1586]: 2025-11-01 00:19:23.493 [INFO][4660] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:19:23.509131 env[1586]: 2025-11-01 00:19:23.502 [WARNING][4660] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" HandleID="k8s-pod-network.901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--p85vf-eth0" Nov 1 00:19:23.509131 env[1586]: 2025-11-01 00:19:23.502 [INFO][4660] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" HandleID="k8s-pod-network.901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--p85vf-eth0" Nov 1 00:19:23.509131 env[1586]: 2025-11-01 00:19:23.506 [INFO][4660] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:19:23.509131 env[1586]: 2025-11-01 00:19:23.508 [INFO][4640] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" Nov 1 00:19:23.509748 env[1586]: time="2025-11-01T00:19:23.509716877Z" level=info msg="TearDown network for sandbox \"901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15\" successfully" Nov 1 00:19:23.509834 env[1586]: time="2025-11-01T00:19:23.509817477Z" level=info msg="StopPodSandbox for \"901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15\" returns successfully" Nov 1 00:19:23.510530 env[1586]: time="2025-11-01T00:19:23.510504277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c8dcbbd64-p85vf,Uid:ab7373cc-dd84-417d-8edc-59fbf979f4b4,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:19:23.526906 env[1586]: 2025-11-01 00:19:23.451 [INFO][4631] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" Nov 1 00:19:23.526906 env[1586]: 2025-11-01 00:19:23.451 [INFO][4631] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" iface="eth0" netns="/var/run/netns/cni-5dccec9e-7666-7303-cd1d-8eac648c7093" Nov 1 00:19:23.526906 env[1586]: 2025-11-01 00:19:23.451 [INFO][4631] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" iface="eth0" netns="/var/run/netns/cni-5dccec9e-7666-7303-cd1d-8eac648c7093" Nov 1 00:19:23.526906 env[1586]: 2025-11-01 00:19:23.451 [INFO][4631] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" iface="eth0" netns="/var/run/netns/cni-5dccec9e-7666-7303-cd1d-8eac648c7093" Nov 1 00:19:23.526906 env[1586]: 2025-11-01 00:19:23.451 [INFO][4631] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" Nov 1 00:19:23.526906 env[1586]: 2025-11-01 00:19:23.452 [INFO][4631] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" Nov 1 00:19:23.526906 env[1586]: 2025-11-01 00:19:23.514 [INFO][4666] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" HandleID="k8s-pod-network.28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" Workload="ci--3510.3.8--n--c51a7922c9-k8s-csi--node--driver--4mt97-eth0" Nov 1 00:19:23.526906 env[1586]: 2025-11-01 00:19:23.514 [INFO][4666] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:19:23.526906 env[1586]: 2025-11-01 00:19:23.514 [INFO][4666] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:19:23.526906 env[1586]: 2025-11-01 00:19:23.522 [WARNING][4666] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" HandleID="k8s-pod-network.28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" Workload="ci--3510.3.8--n--c51a7922c9-k8s-csi--node--driver--4mt97-eth0" Nov 1 00:19:23.526906 env[1586]: 2025-11-01 00:19:23.522 [INFO][4666] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" HandleID="k8s-pod-network.28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" Workload="ci--3510.3.8--n--c51a7922c9-k8s-csi--node--driver--4mt97-eth0" Nov 1 00:19:23.526906 env[1586]: 2025-11-01 00:19:23.523 [INFO][4666] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:19:23.526906 env[1586]: 2025-11-01 00:19:23.525 [INFO][4631] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" Nov 1 00:19:23.532532 env[1586]: time="2025-11-01T00:19:23.527198825Z" level=info msg="TearDown network for sandbox \"28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857\" successfully" Nov 1 00:19:23.532532 env[1586]: time="2025-11-01T00:19:23.527228945Z" level=info msg="StopPodSandbox for \"28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857\" returns successfully" Nov 1 00:19:23.532532 env[1586]: time="2025-11-01T00:19:23.528998343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4mt97,Uid:8e50a05e-0803-4e20-bd2b-ccf8c9d67c23,Namespace:calico-system,Attempt:1,}" Nov 1 00:19:23.532644 systemd[1]: run-netns-cni\x2d45d59e6d\x2d92cc\x2d8fc9\x2d4bdd\x2dcca1936ccd7c.mount: Deactivated successfully. Nov 1 00:19:23.532771 systemd[1]: run-netns-cni\x2d5dccec9e\x2d7666\x2d7303\x2dcd1d\x2d8eac648c7093.mount: Deactivated successfully. 
Nov 1 00:19:23.647686 kubelet[2677]: E1101 00:19:23.641595 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-qwkg7" podUID="da0e9dac-d5af-4669-8132-3ec847bb81ba" Nov 1 00:19:23.647686 kubelet[2677]: E1101 00:19:23.643557 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pw8c5" podUID="1e69bd0a-b324-4064-9086-3d6aa0d23b51" Nov 1 00:19:23.676000 audit[4722]: NETFILTER_CFG table=filter:122 family=2 entries=14 op=nft_register_rule pid=4722 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:19:23.676000 audit[4722]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff25912c0 a2=0 a3=1 items=0 ppid=2782 pid=4722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:23.676000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:19:23.688000 audit[4722]: NETFILTER_CFG table=nat:123 
family=2 entries=20 op=nft_register_rule pid=4722 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:19:23.688000 audit[4722]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff25912c0 a2=0 a3=1 items=0 ppid=2782 pid=4722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:23.688000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:19:23.734597 systemd-networkd[1788]: calif1daaeff60d: Link UP Nov 1 00:19:23.752719 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 00:19:23.755182 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif1daaeff60d: link becomes ready Nov 1 00:19:23.761939 systemd-networkd[1788]: calif1daaeff60d: Gained carrier Nov 1 00:19:23.792017 env[1586]: 2025-11-01 00:19:23.616 [INFO][4688] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--p85vf-eth0 calico-apiserver-6c8dcbbd64- calico-apiserver ab7373cc-dd84-417d-8edc-59fbf979f4b4 1022 0 2025-11-01 00:18:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c8dcbbd64 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.8-n-c51a7922c9 calico-apiserver-6c8dcbbd64-p85vf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif1daaeff60d [] [] }} ContainerID="7d179ea4a3567280e79e99f49cc38014ffa6e6497ed7931ec5964b8ff87306e8" Namespace="calico-apiserver" Pod="calico-apiserver-6c8dcbbd64-p85vf" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--p85vf-" Nov 1 
00:19:23.792017 env[1586]: 2025-11-01 00:19:23.616 [INFO][4688] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7d179ea4a3567280e79e99f49cc38014ffa6e6497ed7931ec5964b8ff87306e8" Namespace="calico-apiserver" Pod="calico-apiserver-6c8dcbbd64-p85vf" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--p85vf-eth0" Nov 1 00:19:23.792017 env[1586]: 2025-11-01 00:19:23.672 [INFO][4713] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7d179ea4a3567280e79e99f49cc38014ffa6e6497ed7931ec5964b8ff87306e8" HandleID="k8s-pod-network.7d179ea4a3567280e79e99f49cc38014ffa6e6497ed7931ec5964b8ff87306e8" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--p85vf-eth0" Nov 1 00:19:23.792017 env[1586]: 2025-11-01 00:19:23.672 [INFO][4713] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7d179ea4a3567280e79e99f49cc38014ffa6e6497ed7931ec5964b8ff87306e8" HandleID="k8s-pod-network.7d179ea4a3567280e79e99f49cc38014ffa6e6497ed7931ec5964b8ff87306e8" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--p85vf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb0b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.8-n-c51a7922c9", "pod":"calico-apiserver-6c8dcbbd64-p85vf", "timestamp":"2025-11-01 00:19:23.672684639 +0000 UTC"}, Hostname:"ci-3510.3.8-n-c51a7922c9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:19:23.792017 env[1586]: 2025-11-01 00:19:23.673 [INFO][4713] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:19:23.792017 env[1586]: 2025-11-01 00:19:23.673 [INFO][4713] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:19:23.792017 env[1586]: 2025-11-01 00:19:23.673 [INFO][4713] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-c51a7922c9' Nov 1 00:19:23.792017 env[1586]: 2025-11-01 00:19:23.686 [INFO][4713] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7d179ea4a3567280e79e99f49cc38014ffa6e6497ed7931ec5964b8ff87306e8" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:23.792017 env[1586]: 2025-11-01 00:19:23.691 [INFO][4713] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:23.792017 env[1586]: 2025-11-01 00:19:23.696 [INFO][4713] ipam/ipam.go 511: Trying affinity for 192.168.15.0/26 host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:23.792017 env[1586]: 2025-11-01 00:19:23.698 [INFO][4713] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.0/26 host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:23.792017 env[1586]: 2025-11-01 00:19:23.700 [INFO][4713] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.0/26 host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:23.792017 env[1586]: 2025-11-01 00:19:23.700 [INFO][4713] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.15.0/26 handle="k8s-pod-network.7d179ea4a3567280e79e99f49cc38014ffa6e6497ed7931ec5964b8ff87306e8" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:23.792017 env[1586]: 2025-11-01 00:19:23.702 [INFO][4713] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7d179ea4a3567280e79e99f49cc38014ffa6e6497ed7931ec5964b8ff87306e8 Nov 1 00:19:23.792017 env[1586]: 2025-11-01 00:19:23.709 [INFO][4713] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.15.0/26 handle="k8s-pod-network.7d179ea4a3567280e79e99f49cc38014ffa6e6497ed7931ec5964b8ff87306e8" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:23.792017 env[1586]: 2025-11-01 00:19:23.727 [INFO][4713] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.15.6/26] block=192.168.15.0/26 
handle="k8s-pod-network.7d179ea4a3567280e79e99f49cc38014ffa6e6497ed7931ec5964b8ff87306e8" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:23.792017 env[1586]: 2025-11-01 00:19:23.727 [INFO][4713] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.6/26] handle="k8s-pod-network.7d179ea4a3567280e79e99f49cc38014ffa6e6497ed7931ec5964b8ff87306e8" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:23.792017 env[1586]: 2025-11-01 00:19:23.727 [INFO][4713] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:19:23.792017 env[1586]: 2025-11-01 00:19:23.727 [INFO][4713] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.15.6/26] IPv6=[] ContainerID="7d179ea4a3567280e79e99f49cc38014ffa6e6497ed7931ec5964b8ff87306e8" HandleID="k8s-pod-network.7d179ea4a3567280e79e99f49cc38014ffa6e6497ed7931ec5964b8ff87306e8" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--p85vf-eth0" Nov 1 00:19:23.792791 env[1586]: 2025-11-01 00:19:23.732 [INFO][4688] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7d179ea4a3567280e79e99f49cc38014ffa6e6497ed7931ec5964b8ff87306e8" Namespace="calico-apiserver" Pod="calico-apiserver-6c8dcbbd64-p85vf" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--p85vf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--p85vf-eth0", GenerateName:"calico-apiserver-6c8dcbbd64-", Namespace:"calico-apiserver", SelfLink:"", UID:"ab7373cc-dd84-417d-8edc-59fbf979f4b4", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 18, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c8dcbbd64", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-c51a7922c9", ContainerID:"", Pod:"calico-apiserver-6c8dcbbd64-p85vf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif1daaeff60d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:19:23.792791 env[1586]: 2025-11-01 00:19:23.732 [INFO][4688] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.6/32] ContainerID="7d179ea4a3567280e79e99f49cc38014ffa6e6497ed7931ec5964b8ff87306e8" Namespace="calico-apiserver" Pod="calico-apiserver-6c8dcbbd64-p85vf" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--p85vf-eth0" Nov 1 00:19:23.792791 env[1586]: 2025-11-01 00:19:23.732 [INFO][4688] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif1daaeff60d ContainerID="7d179ea4a3567280e79e99f49cc38014ffa6e6497ed7931ec5964b8ff87306e8" Namespace="calico-apiserver" Pod="calico-apiserver-6c8dcbbd64-p85vf" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--p85vf-eth0" Nov 1 00:19:23.792791 env[1586]: 2025-11-01 00:19:23.764 [INFO][4688] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7d179ea4a3567280e79e99f49cc38014ffa6e6497ed7931ec5964b8ff87306e8" Namespace="calico-apiserver" Pod="calico-apiserver-6c8dcbbd64-p85vf" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--p85vf-eth0" Nov 1 
00:19:23.792791 env[1586]: 2025-11-01 00:19:23.765 [INFO][4688] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7d179ea4a3567280e79e99f49cc38014ffa6e6497ed7931ec5964b8ff87306e8" Namespace="calico-apiserver" Pod="calico-apiserver-6c8dcbbd64-p85vf" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--p85vf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--p85vf-eth0", GenerateName:"calico-apiserver-6c8dcbbd64-", Namespace:"calico-apiserver", SelfLink:"", UID:"ab7373cc-dd84-417d-8edc-59fbf979f4b4", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 18, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c8dcbbd64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-c51a7922c9", ContainerID:"7d179ea4a3567280e79e99f49cc38014ffa6e6497ed7931ec5964b8ff87306e8", Pod:"calico-apiserver-6c8dcbbd64-p85vf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif1daaeff60d", MAC:"4e:2e:e7:5f:ab:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} 
Nov 1 00:19:23.792791 env[1586]: 2025-11-01 00:19:23.788 [INFO][4688] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7d179ea4a3567280e79e99f49cc38014ffa6e6497ed7931ec5964b8ff87306e8" Namespace="calico-apiserver" Pod="calico-apiserver-6c8dcbbd64-p85vf" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--p85vf-eth0" Nov 1 00:19:23.807000 audit[4745]: NETFILTER_CFG table=filter:124 family=2 entries=53 op=nft_register_chain pid=4745 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:19:23.807000 audit[4745]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26640 a0=3 a1=fffff06423b0 a2=0 a3=ffff91ebefa8 items=0 ppid=3943 pid=4745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:23.807000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:19:23.817807 env[1586]: time="2025-11-01T00:19:23.816544535Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:19:23.817807 env[1586]: time="2025-11-01T00:19:23.816647535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:19:23.817807 env[1586]: time="2025-11-01T00:19:23.816679575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:19:23.817807 env[1586]: time="2025-11-01T00:19:23.816844055Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d179ea4a3567280e79e99f49cc38014ffa6e6497ed7931ec5964b8ff87306e8 pid=4752 runtime=io.containerd.runc.v2 Nov 1 00:19:23.874857 systemd-networkd[1788]: califc14c83cef6: Link UP Nov 1 00:19:23.894579 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): califc14c83cef6: link becomes ready Nov 1 00:19:23.894421 systemd-networkd[1788]: califc14c83cef6: Gained carrier Nov 1 00:19:23.926795 env[1586]: 2025-11-01 00:19:23.632 [INFO][4675] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--q8677-eth0 coredns-668d6bf9bc- kube-system a4048da5-d286-44c8-9ec0-180e591b9eec 1021 0 2025-11-01 00:18:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.8-n-c51a7922c9 coredns-668d6bf9bc-q8677 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califc14c83cef6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6e25a0da95d81e86b701606a50d233d7e1a39ce3c66b64c4a029b1ca98fcec01" Namespace="kube-system" Pod="coredns-668d6bf9bc-q8677" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--q8677-" Nov 1 00:19:23.926795 env[1586]: 2025-11-01 00:19:23.632 [INFO][4675] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6e25a0da95d81e86b701606a50d233d7e1a39ce3c66b64c4a029b1ca98fcec01" Namespace="kube-system" Pod="coredns-668d6bf9bc-q8677" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--q8677-eth0" Nov 1 00:19:23.926795 env[1586]: 2025-11-01 00:19:23.781 [INFO][4719] ipam/ipam_plugin.go 227: Calico CNI IPAM request 
count IPv4=1 IPv6=0 ContainerID="6e25a0da95d81e86b701606a50d233d7e1a39ce3c66b64c4a029b1ca98fcec01" HandleID="k8s-pod-network.6e25a0da95d81e86b701606a50d233d7e1a39ce3c66b64c4a029b1ca98fcec01" Workload="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--q8677-eth0" Nov 1 00:19:23.926795 env[1586]: 2025-11-01 00:19:23.782 [INFO][4719] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6e25a0da95d81e86b701606a50d233d7e1a39ce3c66b64c4a029b1ca98fcec01" HandleID="k8s-pod-network.6e25a0da95d81e86b701606a50d233d7e1a39ce3c66b64c4a029b1ca98fcec01" Workload="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--q8677-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c180), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.8-n-c51a7922c9", "pod":"coredns-668d6bf9bc-q8677", "timestamp":"2025-11-01 00:19:23.78184392 +0000 UTC"}, Hostname:"ci-3510.3.8-n-c51a7922c9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:19:23.926795 env[1586]: 2025-11-01 00:19:23.782 [INFO][4719] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:19:23.926795 env[1586]: 2025-11-01 00:19:23.782 [INFO][4719] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:19:23.926795 env[1586]: 2025-11-01 00:19:23.782 [INFO][4719] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-c51a7922c9' Nov 1 00:19:23.926795 env[1586]: 2025-11-01 00:19:23.802 [INFO][4719] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6e25a0da95d81e86b701606a50d233d7e1a39ce3c66b64c4a029b1ca98fcec01" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:23.926795 env[1586]: 2025-11-01 00:19:23.824 [INFO][4719] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:23.926795 env[1586]: 2025-11-01 00:19:23.830 [INFO][4719] ipam/ipam.go 511: Trying affinity for 192.168.15.0/26 host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:23.926795 env[1586]: 2025-11-01 00:19:23.833 [INFO][4719] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.0/26 host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:23.926795 env[1586]: 2025-11-01 00:19:23.836 [INFO][4719] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.0/26 host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:23.926795 env[1586]: 2025-11-01 00:19:23.836 [INFO][4719] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.15.0/26 handle="k8s-pod-network.6e25a0da95d81e86b701606a50d233d7e1a39ce3c66b64c4a029b1ca98fcec01" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:23.926795 env[1586]: 2025-11-01 00:19:23.840 [INFO][4719] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6e25a0da95d81e86b701606a50d233d7e1a39ce3c66b64c4a029b1ca98fcec01 Nov 1 00:19:23.926795 env[1586]: 2025-11-01 00:19:23.846 [INFO][4719] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.15.0/26 handle="k8s-pod-network.6e25a0da95d81e86b701606a50d233d7e1a39ce3c66b64c4a029b1ca98fcec01" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:23.926795 env[1586]: 2025-11-01 00:19:23.859 [INFO][4719] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.15.7/26] block=192.168.15.0/26 
handle="k8s-pod-network.6e25a0da95d81e86b701606a50d233d7e1a39ce3c66b64c4a029b1ca98fcec01" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:23.926795 env[1586]: 2025-11-01 00:19:23.859 [INFO][4719] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.7/26] handle="k8s-pod-network.6e25a0da95d81e86b701606a50d233d7e1a39ce3c66b64c4a029b1ca98fcec01" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:23.926795 env[1586]: 2025-11-01 00:19:23.859 [INFO][4719] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:19:23.926795 env[1586]: 2025-11-01 00:19:23.859 [INFO][4719] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.15.7/26] IPv6=[] ContainerID="6e25a0da95d81e86b701606a50d233d7e1a39ce3c66b64c4a029b1ca98fcec01" HandleID="k8s-pod-network.6e25a0da95d81e86b701606a50d233d7e1a39ce3c66b64c4a029b1ca98fcec01" Workload="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--q8677-eth0" Nov 1 00:19:23.927348 env[1586]: 2025-11-01 00:19:23.864 [INFO][4675] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6e25a0da95d81e86b701606a50d233d7e1a39ce3c66b64c4a029b1ca98fcec01" Namespace="kube-system" Pod="coredns-668d6bf9bc-q8677" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--q8677-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--q8677-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a4048da5-d286-44c8-9ec0-180e591b9eec", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 18, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-c51a7922c9", ContainerID:"", Pod:"coredns-668d6bf9bc-q8677", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc14c83cef6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:19:23.927348 env[1586]: 2025-11-01 00:19:23.865 [INFO][4675] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.7/32] ContainerID="6e25a0da95d81e86b701606a50d233d7e1a39ce3c66b64c4a029b1ca98fcec01" Namespace="kube-system" Pod="coredns-668d6bf9bc-q8677" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--q8677-eth0" Nov 1 00:19:23.927348 env[1586]: 2025-11-01 00:19:23.865 [INFO][4675] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc14c83cef6 ContainerID="6e25a0da95d81e86b701606a50d233d7e1a39ce3c66b64c4a029b1ca98fcec01" Namespace="kube-system" Pod="coredns-668d6bf9bc-q8677" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--q8677-eth0" Nov 1 00:19:23.927348 env[1586]: 2025-11-01 00:19:23.894 [INFO][4675] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6e25a0da95d81e86b701606a50d233d7e1a39ce3c66b64c4a029b1ca98fcec01" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-q8677" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--q8677-eth0" Nov 1 00:19:23.927348 env[1586]: 2025-11-01 00:19:23.897 [INFO][4675] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6e25a0da95d81e86b701606a50d233d7e1a39ce3c66b64c4a029b1ca98fcec01" Namespace="kube-system" Pod="coredns-668d6bf9bc-q8677" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--q8677-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--q8677-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a4048da5-d286-44c8-9ec0-180e591b9eec", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 18, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-c51a7922c9", ContainerID:"6e25a0da95d81e86b701606a50d233d7e1a39ce3c66b64c4a029b1ca98fcec01", Pod:"coredns-668d6bf9bc-q8677", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc14c83cef6", MAC:"d6:be:f2:2a:43:b0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:19:23.927348 env[1586]: 2025-11-01 00:19:23.923 [INFO][4675] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6e25a0da95d81e86b701606a50d233d7e1a39ce3c66b64c4a029b1ca98fcec01" Namespace="kube-system" Pod="coredns-668d6bf9bc-q8677" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--q8677-eth0" Nov 1 00:19:23.945074 env[1586]: time="2025-11-01T00:19:23.945007722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c8dcbbd64-p85vf,Uid:ab7373cc-dd84-417d-8edc-59fbf979f4b4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7d179ea4a3567280e79e99f49cc38014ffa6e6497ed7931ec5964b8ff87306e8\"" Nov 1 00:19:23.951005 env[1586]: time="2025-11-01T00:19:23.950961038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:19:23.962664 env[1586]: time="2025-11-01T00:19:23.962534309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:19:23.966723 env[1586]: time="2025-11-01T00:19:23.962624069Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:19:23.966818 env[1586]: time="2025-11-01T00:19:23.966648906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:19:23.968000 audit[4803]: NETFILTER_CFG table=filter:125 family=2 entries=58 op=nft_register_chain pid=4803 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:19:23.968000 audit[4803]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26760 a0=3 a1=ffffda2f50b0 a2=0 a3=ffffa7732fa8 items=0 ppid=3943 pid=4803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:23.968000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:19:23.975837 systemd-networkd[1788]: cali8adfc9d5d37: Link UP Nov 1 00:19:23.980339 env[1586]: time="2025-11-01T00:19:23.979323377Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6e25a0da95d81e86b701606a50d233d7e1a39ce3c66b64c4a029b1ca98fcec01 pid=4801 runtime=io.containerd.runc.v2 Nov 1 00:19:23.988696 systemd-networkd[1788]: cali8adfc9d5d37: Gained carrier Nov 1 00:19:23.989371 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali8adfc9d5d37: link becomes ready Nov 1 00:19:24.011937 env[1586]: 2025-11-01 00:19:23.731 [INFO][4700] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.8--n--c51a7922c9-k8s-csi--node--driver--4mt97-eth0 csi-node-driver- calico-system 8e50a05e-0803-4e20-bd2b-ccf8c9d67c23 1023 0 2025-11-01 00:18:57 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-3510.3.8-n-c51a7922c9 
csi-node-driver-4mt97 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8adfc9d5d37 [] [] }} ContainerID="206236d4118de9711e41090c6ea78bdefa606aae17d6f5b028445ef81dc999b5" Namespace="calico-system" Pod="csi-node-driver-4mt97" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-csi--node--driver--4mt97-" Nov 1 00:19:24.011937 env[1586]: 2025-11-01 00:19:23.731 [INFO][4700] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="206236d4118de9711e41090c6ea78bdefa606aae17d6f5b028445ef81dc999b5" Namespace="calico-system" Pod="csi-node-driver-4mt97" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-csi--node--driver--4mt97-eth0" Nov 1 00:19:24.011937 env[1586]: 2025-11-01 00:19:23.864 [INFO][4733] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="206236d4118de9711e41090c6ea78bdefa606aae17d6f5b028445ef81dc999b5" HandleID="k8s-pod-network.206236d4118de9711e41090c6ea78bdefa606aae17d6f5b028445ef81dc999b5" Workload="ci--3510.3.8--n--c51a7922c9-k8s-csi--node--driver--4mt97-eth0" Nov 1 00:19:24.011937 env[1586]: 2025-11-01 00:19:23.864 [INFO][4733] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="206236d4118de9711e41090c6ea78bdefa606aae17d6f5b028445ef81dc999b5" HandleID="k8s-pod-network.206236d4118de9711e41090c6ea78bdefa606aae17d6f5b028445ef81dc999b5" Workload="ci--3510.3.8--n--c51a7922c9-k8s-csi--node--driver--4mt97-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cafe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.8-n-c51a7922c9", "pod":"csi-node-driver-4mt97", "timestamp":"2025-11-01 00:19:23.86443166 +0000 UTC"}, Hostname:"ci-3510.3.8-n-c51a7922c9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:19:24.011937 env[1586]: 2025-11-01 00:19:23.864 [INFO][4733] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:19:24.011937 env[1586]: 2025-11-01 00:19:23.864 [INFO][4733] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:19:24.011937 env[1586]: 2025-11-01 00:19:23.864 [INFO][4733] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.8-n-c51a7922c9' Nov 1 00:19:24.011937 env[1586]: 2025-11-01 00:19:23.902 [INFO][4733] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.206236d4118de9711e41090c6ea78bdefa606aae17d6f5b028445ef81dc999b5" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:24.011937 env[1586]: 2025-11-01 00:19:23.920 [INFO][4733] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:24.011937 env[1586]: 2025-11-01 00:19:23.929 [INFO][4733] ipam/ipam.go 511: Trying affinity for 192.168.15.0/26 host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:24.011937 env[1586]: 2025-11-01 00:19:23.931 [INFO][4733] ipam/ipam.go 158: Attempting to load block cidr=192.168.15.0/26 host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:24.011937 env[1586]: 2025-11-01 00:19:23.934 [INFO][4733] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.15.0/26 host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:24.011937 env[1586]: 2025-11-01 00:19:23.934 [INFO][4733] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.15.0/26 handle="k8s-pod-network.206236d4118de9711e41090c6ea78bdefa606aae17d6f5b028445ef81dc999b5" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:24.011937 env[1586]: 2025-11-01 00:19:23.938 [INFO][4733] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.206236d4118de9711e41090c6ea78bdefa606aae17d6f5b028445ef81dc999b5 Nov 1 00:19:24.011937 env[1586]: 2025-11-01 00:19:23.947 [INFO][4733] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.15.0/26 handle="k8s-pod-network.206236d4118de9711e41090c6ea78bdefa606aae17d6f5b028445ef81dc999b5" 
host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:24.011937 env[1586]: 2025-11-01 00:19:23.961 [INFO][4733] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.15.8/26] block=192.168.15.0/26 handle="k8s-pod-network.206236d4118de9711e41090c6ea78bdefa606aae17d6f5b028445ef81dc999b5" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:24.011937 env[1586]: 2025-11-01 00:19:23.961 [INFO][4733] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.15.8/26] handle="k8s-pod-network.206236d4118de9711e41090c6ea78bdefa606aae17d6f5b028445ef81dc999b5" host="ci-3510.3.8-n-c51a7922c9" Nov 1 00:19:24.011937 env[1586]: 2025-11-01 00:19:23.962 [INFO][4733] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:19:24.011937 env[1586]: 2025-11-01 00:19:23.962 [INFO][4733] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.15.8/26] IPv6=[] ContainerID="206236d4118de9711e41090c6ea78bdefa606aae17d6f5b028445ef81dc999b5" HandleID="k8s-pod-network.206236d4118de9711e41090c6ea78bdefa606aae17d6f5b028445ef81dc999b5" Workload="ci--3510.3.8--n--c51a7922c9-k8s-csi--node--driver--4mt97-eth0" Nov 1 00:19:24.012561 env[1586]: 2025-11-01 00:19:23.965 [INFO][4700] cni-plugin/k8s.go 418: Populated endpoint ContainerID="206236d4118de9711e41090c6ea78bdefa606aae17d6f5b028445ef81dc999b5" Namespace="calico-system" Pod="csi-node-driver-4mt97" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-csi--node--driver--4mt97-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--c51a7922c9-k8s-csi--node--driver--4mt97-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8e50a05e-0803-4e20-bd2b-ccf8c9d67c23", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 18, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", 
"controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-c51a7922c9", ContainerID:"", Pod:"csi-node-driver-4mt97", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.15.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8adfc9d5d37", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:19:24.012561 env[1586]: 2025-11-01 00:19:23.965 [INFO][4700] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.15.8/32] ContainerID="206236d4118de9711e41090c6ea78bdefa606aae17d6f5b028445ef81dc999b5" Namespace="calico-system" Pod="csi-node-driver-4mt97" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-csi--node--driver--4mt97-eth0" Nov 1 00:19:24.012561 env[1586]: 2025-11-01 00:19:23.965 [INFO][4700] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8adfc9d5d37 ContainerID="206236d4118de9711e41090c6ea78bdefa606aae17d6f5b028445ef81dc999b5" Namespace="calico-system" Pod="csi-node-driver-4mt97" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-csi--node--driver--4mt97-eth0" Nov 1 00:19:24.012561 env[1586]: 2025-11-01 00:19:23.989 [INFO][4700] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="206236d4118de9711e41090c6ea78bdefa606aae17d6f5b028445ef81dc999b5" Namespace="calico-system" Pod="csi-node-driver-4mt97" 
WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-csi--node--driver--4mt97-eth0" Nov 1 00:19:24.012561 env[1586]: 2025-11-01 00:19:23.989 [INFO][4700] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="206236d4118de9711e41090c6ea78bdefa606aae17d6f5b028445ef81dc999b5" Namespace="calico-system" Pod="csi-node-driver-4mt97" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-csi--node--driver--4mt97-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--c51a7922c9-k8s-csi--node--driver--4mt97-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8e50a05e-0803-4e20-bd2b-ccf8c9d67c23", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 18, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-c51a7922c9", ContainerID:"206236d4118de9711e41090c6ea78bdefa606aae17d6f5b028445ef81dc999b5", Pod:"csi-node-driver-4mt97", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.15.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8adfc9d5d37", MAC:"1a:6a:3a:1b:77:04", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:19:24.012561 env[1586]: 2025-11-01 00:19:24.009 [INFO][4700] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="206236d4118de9711e41090c6ea78bdefa606aae17d6f5b028445ef81dc999b5" Namespace="calico-system" Pod="csi-node-driver-4mt97" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-csi--node--driver--4mt97-eth0" Nov 1 00:19:24.031000 audit[4847]: NETFILTER_CFG table=filter:126 family=2 entries=56 op=nft_register_chain pid=4847 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:19:24.031000 audit[4847]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25500 a0=3 a1=fffffef11020 a2=0 a3=ffffb7419fa8 items=0 ppid=3943 pid=4847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:24.031000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:19:24.037261 env[1586]: time="2025-11-01T00:19:24.037204415Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:19:24.037710 env[1586]: time="2025-11-01T00:19:24.037677815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:19:24.037859 env[1586]: time="2025-11-01T00:19:24.037803775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:19:24.038057 env[1586]: time="2025-11-01T00:19:24.038020375Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/206236d4118de9711e41090c6ea78bdefa606aae17d6f5b028445ef81dc999b5 pid=4846 runtime=io.containerd.runc.v2 Nov 1 00:19:24.046125 env[1586]: time="2025-11-01T00:19:24.046093089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-q8677,Uid:a4048da5-d286-44c8-9ec0-180e591b9eec,Namespace:kube-system,Attempt:1,} returns sandbox id \"6e25a0da95d81e86b701606a50d233d7e1a39ce3c66b64c4a029b1ca98fcec01\"" Nov 1 00:19:24.054948 env[1586]: time="2025-11-01T00:19:24.054898323Z" level=info msg="CreateContainer within sandbox \"6e25a0da95d81e86b701606a50d233d7e1a39ce3c66b64c4a029b1ca98fcec01\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:19:24.086169 env[1586]: time="2025-11-01T00:19:24.086121180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4mt97,Uid:8e50a05e-0803-4e20-bd2b-ccf8c9d67c23,Namespace:calico-system,Attempt:1,} returns sandbox id \"206236d4118de9711e41090c6ea78bdefa606aae17d6f5b028445ef81dc999b5\"" Nov 1 00:19:24.102875 env[1586]: time="2025-11-01T00:19:24.102824568Z" level=info msg="CreateContainer within sandbox \"6e25a0da95d81e86b701606a50d233d7e1a39ce3c66b64c4a029b1ca98fcec01\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1d5a06ae03ab445ff1b6aa10a1bb6d3b7ddd89aaed008faf415e0670b254d517\"" Nov 1 00:19:24.104821 env[1586]: time="2025-11-01T00:19:24.104757807Z" level=info msg="StartContainer for \"1d5a06ae03ab445ff1b6aa10a1bb6d3b7ddd89aaed008faf415e0670b254d517\"" Nov 1 00:19:24.149317 env[1586]: time="2025-11-01T00:19:24.147980336Z" level=info msg="StartContainer for \"1d5a06ae03ab445ff1b6aa10a1bb6d3b7ddd89aaed008faf415e0670b254d517\" returns successfully" Nov 1 00:19:24.198430 env[1586]: time="2025-11-01T00:19:24.198236620Z" 
level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:19:24.205516 systemd-networkd[1788]: caliada09114b1a: Gained IPv6LL Nov 1 00:19:24.211122 env[1586]: time="2025-11-01T00:19:24.211060171Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:19:24.211869 kubelet[2677]: E1101 00:19:24.211360 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:19:24.211869 kubelet[2677]: E1101 00:19:24.211408 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:19:24.211869 kubelet[2677]: E1101 00:19:24.211606 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kh9cr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c8dcbbd64-p85vf_calico-apiserver(ab7373cc-dd84-417d-8edc-59fbf979f4b4): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:19:24.212380 env[1586]: time="2025-11-01T00:19:24.212357290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:19:24.213409 kubelet[2677]: E1101 00:19:24.213352 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-p85vf" podUID="ab7373cc-dd84-417d-8edc-59fbf979f4b4" Nov 1 00:19:24.444528 env[1586]: time="2025-11-01T00:19:24.444474324Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:19:24.448972 env[1586]: time="2025-11-01T00:19:24.448875601Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:19:24.449430 kubelet[2677]: E1101 00:19:24.449394 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:19:24.449563 kubelet[2677]: E1101 00:19:24.449546 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:19:24.450778 kubelet[2677]: E1101 00:19:24.450732 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gnml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource
{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4mt97_calico-system(8e50a05e-0803-4e20-bd2b-ccf8c9d67c23): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:19:24.452758 env[1586]: time="2025-11-01T00:19:24.452724078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:19:24.625319 kubelet[2677]: I1101 00:19:24.625251 2677 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:19:24.644879 systemd[1]: run-containerd-runc-k8s.io-db6f76a2b7ab6ed2fb0e97e0b262e36f004b1750c5d73343a16030568d48efc1-runc.aJjcQi.mount: Deactivated successfully. Nov 1 00:19:24.649934 kubelet[2677]: E1101 00:19:24.649829 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-p85vf" podUID="ab7373cc-dd84-417d-8edc-59fbf979f4b4" Nov 1 00:19:24.662524 kubelet[2677]: E1101 00:19:24.662443 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-qwkg7" podUID="da0e9dac-d5af-4669-8132-3ec847bb81ba" Nov 1 00:19:24.695000 audit[4943]: NETFILTER_CFG table=filter:127 family=2 entries=14 op=nft_register_rule pid=4943 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:19:24.697753 env[1586]: time="2025-11-01T00:19:24.697711342Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:19:24.695000 audit[4943]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe0fd0f40 a2=0 a3=1 items=0 ppid=2782 pid=4943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:24.695000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:19:24.700631 env[1586]: time="2025-11-01T00:19:24.700565420Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:19:24.700967 kubelet[2677]: E1101 00:19:24.700932 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:19:24.701085 
kubelet[2677]: E1101 00:19:24.701067 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:19:24.701332 kubelet[2677]: E1101 00:19:24.701271 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gnml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*
false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4mt97_calico-system(8e50a05e-0803-4e20-bd2b-ccf8c9d67c23): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:19:24.704271 kubelet[2677]: E1101 00:19:24.704226 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4mt97" podUID="8e50a05e-0803-4e20-bd2b-ccf8c9d67c23" Nov 1 00:19:24.703000 audit[4943]: NETFILTER_CFG table=nat:128 family=2 entries=20 op=nft_register_rule pid=4943 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:19:24.703000 audit[4943]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffe0fd0f40 a2=0 a3=1 items=0 
ppid=2782 pid=4943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:24.703000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:19:24.791486 systemd[1]: run-containerd-runc-k8s.io-db6f76a2b7ab6ed2fb0e97e0b262e36f004b1750c5d73343a16030568d48efc1-runc.Ojyn9L.mount: Deactivated successfully. Nov 1 00:19:24.798408 kubelet[2677]: I1101 00:19:24.795996 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-q8677" podStartSLOduration=48.795977512 podStartE2EDuration="48.795977512s" podCreationTimestamp="2025-11-01 00:18:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:19:24.721107926 +0000 UTC m=+54.531425000" watchObservedRunningTime="2025-11-01 00:19:24.795977512 +0000 UTC m=+54.606294626" Nov 1 00:19:25.549423 systemd-networkd[1788]: cali8adfc9d5d37: Gained IPv6LL Nov 1 00:19:25.613450 systemd-networkd[1788]: calif1daaeff60d: Gained IPv6LL Nov 1 00:19:25.664772 kubelet[2677]: E1101 00:19:25.664731 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-p85vf" podUID="ab7373cc-dd84-417d-8edc-59fbf979f4b4" Nov 1 00:19:25.665951 kubelet[2677]: E1101 00:19:25.665914 2677 pod_workers.go:1301] "Error syncing 
pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4mt97" podUID="8e50a05e-0803-4e20-bd2b-ccf8c9d67c23" Nov 1 00:19:25.731000 audit[4968]: NETFILTER_CFG table=filter:129 family=2 entries=14 op=nft_register_rule pid=4968 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:19:25.731000 audit[4968]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffef60df10 a2=0 a3=1 items=0 ppid=2782 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:25.731000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:19:25.736000 audit[4968]: NETFILTER_CFG table=nat:130 family=2 entries=44 op=nft_register_rule pid=4968 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:19:25.736000 audit[4968]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffef60df10 a2=0 a3=1 items=0 ppid=2782 pid=4968 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:25.736000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:19:25.933422 systemd-networkd[1788]: califc14c83cef6: Gained IPv6LL Nov 1 00:19:26.750000 audit[4970]: NETFILTER_CFG table=filter:131 family=2 entries=14 op=nft_register_rule pid=4970 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:19:26.756682 kernel: kauditd_printk_skb: 38 callbacks suppressed Nov 1 00:19:26.756767 kernel: audit: type=1325 audit(1761956366.750:444): table=filter:131 family=2 entries=14 op=nft_register_rule pid=4970 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:19:26.750000 audit[4970]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffee183990 a2=0 a3=1 items=0 ppid=2782 pid=4970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:26.798580 kernel: audit: type=1300 audit(1761956366.750:444): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffee183990 a2=0 a3=1 items=0 ppid=2782 pid=4970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:26.798742 kernel: audit: type=1327 audit(1761956366.750:444): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:19:26.750000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:19:26.815000 audit[4970]: NETFILTER_CFG table=nat:132 family=2 
entries=56 op=nft_register_chain pid=4970 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:19:26.815000 audit[4970]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffee183990 a2=0 a3=1 items=0 ppid=2782 pid=4970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:26.858419 kernel: audit: type=1325 audit(1761956366.815:445): table=nat:132 family=2 entries=56 op=nft_register_chain pid=4970 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:19:26.858547 kernel: audit: type=1300 audit(1761956366.815:445): arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffee183990 a2=0 a3=1 items=0 ppid=2782 pid=4970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:19:26.858585 kernel: audit: type=1327 audit(1761956366.815:445): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:19:26.815000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:19:30.589708 env[1586]: time="2025-11-01T00:19:30.589394940Z" level=info msg="StopPodSandbox for \"253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696\"" Nov 1 00:19:30.724418 env[1586]: 2025-11-01 00:19:30.669 [WARNING][4990] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--c51a7922c9-k8s-goldmane--666569f655--pw8c5-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"1e69bd0a-b324-4064-9086-3d6aa0d23b51", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 18, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-c51a7922c9", ContainerID:"14a702efb8984a3ff9d792daeee5f9f93ad15092784b45615d773a1055f3e82e", Pod:"goldmane-666569f655-pw8c5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.15.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6e7a515feec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:19:30.724418 env[1586]: 2025-11-01 00:19:30.669 [INFO][4990] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" Nov 1 00:19:30.724418 env[1586]: 2025-11-01 00:19:30.669 [INFO][4990] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" iface="eth0" netns="" Nov 1 00:19:30.724418 env[1586]: 2025-11-01 00:19:30.669 [INFO][4990] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" Nov 1 00:19:30.724418 env[1586]: 2025-11-01 00:19:30.669 [INFO][4990] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" Nov 1 00:19:30.724418 env[1586]: 2025-11-01 00:19:30.711 [INFO][4997] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" HandleID="k8s-pod-network.253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" Workload="ci--3510.3.8--n--c51a7922c9-k8s-goldmane--666569f655--pw8c5-eth0" Nov 1 00:19:30.724418 env[1586]: 2025-11-01 00:19:30.711 [INFO][4997] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:19:30.724418 env[1586]: 2025-11-01 00:19:30.711 [INFO][4997] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:19:30.724418 env[1586]: 2025-11-01 00:19:30.719 [WARNING][4997] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" HandleID="k8s-pod-network.253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" Workload="ci--3510.3.8--n--c51a7922c9-k8s-goldmane--666569f655--pw8c5-eth0" Nov 1 00:19:30.724418 env[1586]: 2025-11-01 00:19:30.719 [INFO][4997] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" HandleID="k8s-pod-network.253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" Workload="ci--3510.3.8--n--c51a7922c9-k8s-goldmane--666569f655--pw8c5-eth0" Nov 1 00:19:30.724418 env[1586]: 2025-11-01 00:19:30.721 [INFO][4997] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:19:30.724418 env[1586]: 2025-11-01 00:19:30.722 [INFO][4990] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" Nov 1 00:19:30.724851 env[1586]: time="2025-11-01T00:19:30.724444489Z" level=info msg="TearDown network for sandbox \"253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696\" successfully" Nov 1 00:19:30.724851 env[1586]: time="2025-11-01T00:19:30.724474769Z" level=info msg="StopPodSandbox for \"253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696\" returns successfully" Nov 1 00:19:30.727524 env[1586]: time="2025-11-01T00:19:30.727496927Z" level=info msg="RemovePodSandbox for \"253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696\"" Nov 1 00:19:30.727671 env[1586]: time="2025-11-01T00:19:30.727632767Z" level=info msg="Forcibly stopping sandbox \"253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696\"" Nov 1 00:19:30.842418 env[1586]: 2025-11-01 00:19:30.802 [WARNING][5012] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--c51a7922c9-k8s-goldmane--666569f655--pw8c5-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"1e69bd0a-b324-4064-9086-3d6aa0d23b51", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 18, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-c51a7922c9", ContainerID:"14a702efb8984a3ff9d792daeee5f9f93ad15092784b45615d773a1055f3e82e", Pod:"goldmane-666569f655-pw8c5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.15.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6e7a515feec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:19:30.842418 env[1586]: 2025-11-01 00:19:30.802 [INFO][5012] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" Nov 1 00:19:30.842418 env[1586]: 2025-11-01 00:19:30.802 [INFO][5012] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" iface="eth0" netns="" Nov 1 00:19:30.842418 env[1586]: 2025-11-01 00:19:30.802 [INFO][5012] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" Nov 1 00:19:30.842418 env[1586]: 2025-11-01 00:19:30.802 [INFO][5012] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" Nov 1 00:19:30.842418 env[1586]: 2025-11-01 00:19:30.823 [INFO][5019] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" HandleID="k8s-pod-network.253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" Workload="ci--3510.3.8--n--c51a7922c9-k8s-goldmane--666569f655--pw8c5-eth0" Nov 1 00:19:30.842418 env[1586]: 2025-11-01 00:19:30.824 [INFO][5019] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:19:30.842418 env[1586]: 2025-11-01 00:19:30.824 [INFO][5019] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:19:30.842418 env[1586]: 2025-11-01 00:19:30.838 [WARNING][5019] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" HandleID="k8s-pod-network.253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" Workload="ci--3510.3.8--n--c51a7922c9-k8s-goldmane--666569f655--pw8c5-eth0" Nov 1 00:19:30.842418 env[1586]: 2025-11-01 00:19:30.838 [INFO][5019] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" HandleID="k8s-pod-network.253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" Workload="ci--3510.3.8--n--c51a7922c9-k8s-goldmane--666569f655--pw8c5-eth0" Nov 1 00:19:30.842418 env[1586]: 2025-11-01 00:19:30.839 [INFO][5019] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:19:30.842418 env[1586]: 2025-11-01 00:19:30.840 [INFO][5012] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696" Nov 1 00:19:30.842924 env[1586]: time="2025-11-01T00:19:30.842879569Z" level=info msg="TearDown network for sandbox \"253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696\" successfully" Nov 1 00:19:30.857601 env[1586]: time="2025-11-01T00:19:30.857562920Z" level=info msg="RemovePodSandbox \"253c6a74a71b754d8ffd3667f0b0a5bace0dab9207d96d6ba7297820e38d2696\" returns successfully" Nov 1 00:19:30.858261 env[1586]: time="2025-11-01T00:19:30.858239839Z" level=info msg="StopPodSandbox for \"901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15\"" Nov 1 00:19:30.935450 env[1586]: 2025-11-01 00:19:30.902 [WARNING][5033] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--p85vf-eth0", GenerateName:"calico-apiserver-6c8dcbbd64-", Namespace:"calico-apiserver", SelfLink:"", UID:"ab7373cc-dd84-417d-8edc-59fbf979f4b4", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 18, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c8dcbbd64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-c51a7922c9", ContainerID:"7d179ea4a3567280e79e99f49cc38014ffa6e6497ed7931ec5964b8ff87306e8", Pod:"calico-apiserver-6c8dcbbd64-p85vf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif1daaeff60d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:19:30.935450 env[1586]: 2025-11-01 00:19:30.903 [INFO][5033] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" Nov 1 00:19:30.935450 env[1586]: 2025-11-01 00:19:30.903 [INFO][5033] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" iface="eth0" netns="" Nov 1 00:19:30.935450 env[1586]: 2025-11-01 00:19:30.903 [INFO][5033] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" Nov 1 00:19:30.935450 env[1586]: 2025-11-01 00:19:30.903 [INFO][5033] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" Nov 1 00:19:30.935450 env[1586]: 2025-11-01 00:19:30.919 [INFO][5040] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" HandleID="k8s-pod-network.901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--p85vf-eth0" Nov 1 00:19:30.935450 env[1586]: 2025-11-01 00:19:30.920 [INFO][5040] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:19:30.935450 env[1586]: 2025-11-01 00:19:30.920 [INFO][5040] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:19:30.935450 env[1586]: 2025-11-01 00:19:30.929 [WARNING][5040] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" HandleID="k8s-pod-network.901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--p85vf-eth0" Nov 1 00:19:30.935450 env[1586]: 2025-11-01 00:19:30.929 [INFO][5040] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" HandleID="k8s-pod-network.901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--p85vf-eth0" Nov 1 00:19:30.935450 env[1586]: 2025-11-01 00:19:30.931 [INFO][5040] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:19:30.935450 env[1586]: 2025-11-01 00:19:30.933 [INFO][5033] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" Nov 1 00:19:30.936020 env[1586]: time="2025-11-01T00:19:30.935979547Z" level=info msg="TearDown network for sandbox \"901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15\" successfully" Nov 1 00:19:30.936092 env[1586]: time="2025-11-01T00:19:30.936075867Z" level=info msg="StopPodSandbox for \"901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15\" returns successfully" Nov 1 00:19:30.936569 env[1586]: time="2025-11-01T00:19:30.936547626Z" level=info msg="RemovePodSandbox for \"901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15\"" Nov 1 00:19:30.936749 env[1586]: time="2025-11-01T00:19:30.936706146Z" level=info msg="Forcibly stopping sandbox \"901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15\"" Nov 1 00:19:30.999612 env[1586]: 2025-11-01 00:19:30.968 [WARNING][5054] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--p85vf-eth0", GenerateName:"calico-apiserver-6c8dcbbd64-", Namespace:"calico-apiserver", SelfLink:"", UID:"ab7373cc-dd84-417d-8edc-59fbf979f4b4", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 18, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c8dcbbd64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-c51a7922c9", ContainerID:"7d179ea4a3567280e79e99f49cc38014ffa6e6497ed7931ec5964b8ff87306e8", Pod:"calico-apiserver-6c8dcbbd64-p85vf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif1daaeff60d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:19:30.999612 env[1586]: 2025-11-01 00:19:30.968 [INFO][5054] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" Nov 1 00:19:30.999612 env[1586]: 2025-11-01 00:19:30.968 [INFO][5054] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" iface="eth0" netns="" Nov 1 00:19:30.999612 env[1586]: 2025-11-01 00:19:30.968 [INFO][5054] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" Nov 1 00:19:30.999612 env[1586]: 2025-11-01 00:19:30.968 [INFO][5054] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" Nov 1 00:19:30.999612 env[1586]: 2025-11-01 00:19:30.988 [INFO][5061] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" HandleID="k8s-pod-network.901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--p85vf-eth0" Nov 1 00:19:30.999612 env[1586]: 2025-11-01 00:19:30.988 [INFO][5061] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:19:30.999612 env[1586]: 2025-11-01 00:19:30.988 [INFO][5061] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:19:30.999612 env[1586]: 2025-11-01 00:19:30.995 [WARNING][5061] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" HandleID="k8s-pod-network.901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--p85vf-eth0" Nov 1 00:19:30.999612 env[1586]: 2025-11-01 00:19:30.996 [INFO][5061] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" HandleID="k8s-pod-network.901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--p85vf-eth0" Nov 1 00:19:30.999612 env[1586]: 2025-11-01 00:19:30.997 [INFO][5061] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:19:30.999612 env[1586]: 2025-11-01 00:19:30.998 [INFO][5054] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15" Nov 1 00:19:31.000175 env[1586]: time="2025-11-01T00:19:31.000133744Z" level=info msg="TearDown network for sandbox \"901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15\" successfully" Nov 1 00:19:31.010353 env[1586]: time="2025-11-01T00:19:31.010316577Z" level=info msg="RemovePodSandbox \"901ee393709af5927cd6675ef0f9fd79975f43bedff4f6b30494fa372d4e5b15\" returns successfully" Nov 1 00:19:31.010953 env[1586]: time="2025-11-01T00:19:31.010920256Z" level=info msg="StopPodSandbox for \"6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a\"" Nov 1 00:19:31.077704 env[1586]: 2025-11-01 00:19:31.047 [WARNING][5076] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-whisker--69dc8bd568--8tbvd-eth0" Nov 1 00:19:31.077704 env[1586]: 2025-11-01 00:19:31.047 [INFO][5076] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" Nov 1 00:19:31.077704 env[1586]: 2025-11-01 00:19:31.047 [INFO][5076] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" iface="eth0" netns="" Nov 1 00:19:31.077704 env[1586]: 2025-11-01 00:19:31.047 [INFO][5076] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" Nov 1 00:19:31.077704 env[1586]: 2025-11-01 00:19:31.047 [INFO][5076] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" Nov 1 00:19:31.077704 env[1586]: 2025-11-01 00:19:31.065 [INFO][5083] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" HandleID="k8s-pod-network.6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" Workload="ci--3510.3.8--n--c51a7922c9-k8s-whisker--69dc8bd568--8tbvd-eth0" Nov 1 00:19:31.077704 env[1586]: 2025-11-01 00:19:31.065 [INFO][5083] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:19:31.077704 env[1586]: 2025-11-01 00:19:31.065 [INFO][5083] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:19:31.077704 env[1586]: 2025-11-01 00:19:31.073 [WARNING][5083] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" HandleID="k8s-pod-network.6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" Workload="ci--3510.3.8--n--c51a7922c9-k8s-whisker--69dc8bd568--8tbvd-eth0" Nov 1 00:19:31.077704 env[1586]: 2025-11-01 00:19:31.073 [INFO][5083] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" HandleID="k8s-pod-network.6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" Workload="ci--3510.3.8--n--c51a7922c9-k8s-whisker--69dc8bd568--8tbvd-eth0" Nov 1 00:19:31.077704 env[1586]: 2025-11-01 00:19:31.074 [INFO][5083] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:19:31.077704 env[1586]: 2025-11-01 00:19:31.076 [INFO][5076] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" Nov 1 00:19:31.078094 env[1586]: time="2025-11-01T00:19:31.077742612Z" level=info msg="TearDown network for sandbox \"6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a\" successfully" Nov 1 00:19:31.078094 env[1586]: time="2025-11-01T00:19:31.077772492Z" level=info msg="StopPodSandbox for \"6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a\" returns successfully" Nov 1 00:19:31.078361 env[1586]: time="2025-11-01T00:19:31.078277972Z" level=info msg="RemovePodSandbox for \"6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a\"" Nov 1 00:19:31.078423 env[1586]: time="2025-11-01T00:19:31.078355732Z" level=info msg="Forcibly stopping sandbox \"6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a\"" Nov 1 00:19:31.138879 env[1586]: 2025-11-01 00:19:31.107 [WARNING][5097] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" 
WorkloadEndpoint="ci--3510.3.8--n--c51a7922c9-k8s-whisker--69dc8bd568--8tbvd-eth0" Nov 1 00:19:31.138879 env[1586]: 2025-11-01 00:19:31.107 [INFO][5097] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" Nov 1 00:19:31.138879 env[1586]: 2025-11-01 00:19:31.107 [INFO][5097] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" iface="eth0" netns="" Nov 1 00:19:31.138879 env[1586]: 2025-11-01 00:19:31.107 [INFO][5097] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" Nov 1 00:19:31.138879 env[1586]: 2025-11-01 00:19:31.107 [INFO][5097] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" Nov 1 00:19:31.138879 env[1586]: 2025-11-01 00:19:31.127 [INFO][5105] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" HandleID="k8s-pod-network.6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" Workload="ci--3510.3.8--n--c51a7922c9-k8s-whisker--69dc8bd568--8tbvd-eth0" Nov 1 00:19:31.138879 env[1586]: 2025-11-01 00:19:31.127 [INFO][5105] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:19:31.138879 env[1586]: 2025-11-01 00:19:31.127 [INFO][5105] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:19:31.138879 env[1586]: 2025-11-01 00:19:31.135 [WARNING][5105] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" HandleID="k8s-pod-network.6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" Workload="ci--3510.3.8--n--c51a7922c9-k8s-whisker--69dc8bd568--8tbvd-eth0" Nov 1 00:19:31.138879 env[1586]: 2025-11-01 00:19:31.135 [INFO][5105] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" HandleID="k8s-pod-network.6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" Workload="ci--3510.3.8--n--c51a7922c9-k8s-whisker--69dc8bd568--8tbvd-eth0" Nov 1 00:19:31.138879 env[1586]: 2025-11-01 00:19:31.136 [INFO][5105] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:19:31.138879 env[1586]: 2025-11-01 00:19:31.137 [INFO][5097] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a" Nov 1 00:19:31.140347 env[1586]: time="2025-11-01T00:19:31.138849451Z" level=info msg="TearDown network for sandbox \"6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a\" successfully" Nov 1 00:19:31.147721 env[1586]: time="2025-11-01T00:19:31.147691565Z" level=info msg="RemovePodSandbox \"6388c62029979a9027ef19e862c275851108526d9303451d470036986820324a\" returns successfully" Nov 1 00:19:31.148248 env[1586]: time="2025-11-01T00:19:31.148225725Z" level=info msg="StopPodSandbox for \"a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a\"" Nov 1 00:19:31.221827 env[1586]: 2025-11-01 00:19:31.184 [WARNING][5119] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--87vvp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a7634ab8-ff62-48dd-9eee-61be2b01d0bb", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 18, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-c51a7922c9", ContainerID:"6761a414258af3602bf4d4679b331667547f2ae843a5934422d35cce692a9da7", Pod:"coredns-668d6bf9bc-87vvp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali22593b6a12a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:19:31.221827 env[1586]: 2025-11-01 
00:19:31.184 [INFO][5119] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" Nov 1 00:19:31.221827 env[1586]: 2025-11-01 00:19:31.184 [INFO][5119] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" iface="eth0" netns="" Nov 1 00:19:31.221827 env[1586]: 2025-11-01 00:19:31.184 [INFO][5119] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" Nov 1 00:19:31.221827 env[1586]: 2025-11-01 00:19:31.184 [INFO][5119] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" Nov 1 00:19:31.221827 env[1586]: 2025-11-01 00:19:31.209 [INFO][5126] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" HandleID="k8s-pod-network.a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" Workload="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--87vvp-eth0" Nov 1 00:19:31.221827 env[1586]: 2025-11-01 00:19:31.209 [INFO][5126] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:19:31.221827 env[1586]: 2025-11-01 00:19:31.209 [INFO][5126] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:19:31.221827 env[1586]: 2025-11-01 00:19:31.218 [WARNING][5126] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" HandleID="k8s-pod-network.a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" Workload="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--87vvp-eth0" Nov 1 00:19:31.221827 env[1586]: 2025-11-01 00:19:31.218 [INFO][5126] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" HandleID="k8s-pod-network.a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" Workload="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--87vvp-eth0" Nov 1 00:19:31.221827 env[1586]: 2025-11-01 00:19:31.219 [INFO][5126] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:19:31.221827 env[1586]: 2025-11-01 00:19:31.220 [INFO][5119] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" Nov 1 00:19:31.222403 env[1586]: time="2025-11-01T00:19:31.222367476Z" level=info msg="TearDown network for sandbox \"a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a\" successfully" Nov 1 00:19:31.222485 env[1586]: time="2025-11-01T00:19:31.222469195Z" level=info msg="StopPodSandbox for \"a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a\" returns successfully" Nov 1 00:19:31.224268 env[1586]: time="2025-11-01T00:19:31.223022795Z" level=info msg="RemovePodSandbox for \"a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a\"" Nov 1 00:19:31.224268 env[1586]: time="2025-11-01T00:19:31.223057035Z" level=info msg="Forcibly stopping sandbox \"a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a\"" Nov 1 00:19:31.297309 env[1586]: 2025-11-01 00:19:31.255 [WARNING][5141] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--87vvp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a7634ab8-ff62-48dd-9eee-61be2b01d0bb", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 18, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-c51a7922c9", ContainerID:"6761a414258af3602bf4d4679b331667547f2ae843a5934422d35cce692a9da7", Pod:"coredns-668d6bf9bc-87vvp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali22593b6a12a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:19:31.297309 env[1586]: 2025-11-01 
00:19:31.256 [INFO][5141] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" Nov 1 00:19:31.297309 env[1586]: 2025-11-01 00:19:31.256 [INFO][5141] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" iface="eth0" netns="" Nov 1 00:19:31.297309 env[1586]: 2025-11-01 00:19:31.256 [INFO][5141] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" Nov 1 00:19:31.297309 env[1586]: 2025-11-01 00:19:31.256 [INFO][5141] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" Nov 1 00:19:31.297309 env[1586]: 2025-11-01 00:19:31.274 [INFO][5148] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" HandleID="k8s-pod-network.a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" Workload="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--87vvp-eth0" Nov 1 00:19:31.297309 env[1586]: 2025-11-01 00:19:31.275 [INFO][5148] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:19:31.297309 env[1586]: 2025-11-01 00:19:31.275 [INFO][5148] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:19:31.297309 env[1586]: 2025-11-01 00:19:31.291 [WARNING][5148] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" HandleID="k8s-pod-network.a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" Workload="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--87vvp-eth0" Nov 1 00:19:31.297309 env[1586]: 2025-11-01 00:19:31.291 [INFO][5148] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" HandleID="k8s-pod-network.a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" Workload="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--87vvp-eth0" Nov 1 00:19:31.297309 env[1586]: 2025-11-01 00:19:31.293 [INFO][5148] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:19:31.297309 env[1586]: 2025-11-01 00:19:31.296 [INFO][5141] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a" Nov 1 00:19:31.297731 env[1586]: time="2025-11-01T00:19:31.297345466Z" level=info msg="TearDown network for sandbox \"a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a\" successfully" Nov 1 00:19:31.307012 env[1586]: time="2025-11-01T00:19:31.306949659Z" level=info msg="RemovePodSandbox \"a0ef1075264260412de2e2e84f3dceca6e7d5309943d2a7cf8a5243bb360a67a\" returns successfully" Nov 1 00:19:31.307618 env[1586]: time="2025-11-01T00:19:31.307592539Z" level=info msg="StopPodSandbox for \"99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9\"" Nov 1 00:19:31.358346 env[1586]: time="2025-11-01T00:19:31.358307985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:19:31.408336 env[1586]: 2025-11-01 00:19:31.348 [WARNING][5162] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--c51a7922c9-k8s-calico--kube--controllers--86c5674785--bs7n8-eth0", GenerateName:"calico-kube-controllers-86c5674785-", Namespace:"calico-system", SelfLink:"", UID:"57cd90f3-35a2-40bb-93fb-693c3ffcd73d", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 18, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86c5674785", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-c51a7922c9", ContainerID:"fdc6edca14517e156a46e9cebcaeb939e0078635d422542b59847ce7a0b1569b", Pod:"calico-kube-controllers-86c5674785-bs7n8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.15.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4cfc28f5684", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:19:31.408336 env[1586]: 2025-11-01 00:19:31.348 [INFO][5162] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" Nov 1 00:19:31.408336 env[1586]: 2025-11-01 00:19:31.348 [INFO][5162] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" iface="eth0" netns="" Nov 1 00:19:31.408336 env[1586]: 2025-11-01 00:19:31.348 [INFO][5162] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" Nov 1 00:19:31.408336 env[1586]: 2025-11-01 00:19:31.348 [INFO][5162] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" Nov 1 00:19:31.408336 env[1586]: 2025-11-01 00:19:31.394 [INFO][5169] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" HandleID="k8s-pod-network.99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--kube--controllers--86c5674785--bs7n8-eth0" Nov 1 00:19:31.408336 env[1586]: 2025-11-01 00:19:31.394 [INFO][5169] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:19:31.408336 env[1586]: 2025-11-01 00:19:31.394 [INFO][5169] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:19:31.408336 env[1586]: 2025-11-01 00:19:31.404 [WARNING][5169] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" HandleID="k8s-pod-network.99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--kube--controllers--86c5674785--bs7n8-eth0" Nov 1 00:19:31.408336 env[1586]: 2025-11-01 00:19:31.404 [INFO][5169] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" HandleID="k8s-pod-network.99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--kube--controllers--86c5674785--bs7n8-eth0" Nov 1 00:19:31.408336 env[1586]: 2025-11-01 00:19:31.405 [INFO][5169] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:19:31.408336 env[1586]: 2025-11-01 00:19:31.406 [INFO][5162] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" Nov 1 00:19:31.408336 env[1586]: time="2025-11-01T00:19:31.408274152Z" level=info msg="TearDown network for sandbox \"99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9\" successfully" Nov 1 00:19:31.409926 env[1586]: time="2025-11-01T00:19:31.408315992Z" level=info msg="StopPodSandbox for \"99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9\" returns successfully" Nov 1 00:19:31.410438 env[1586]: time="2025-11-01T00:19:31.410406190Z" level=info msg="RemovePodSandbox for \"99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9\"" Nov 1 00:19:31.410488 env[1586]: time="2025-11-01T00:19:31.410445790Z" level=info msg="Forcibly stopping sandbox \"99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9\"" Nov 1 00:19:31.507903 env[1586]: 2025-11-01 00:19:31.460 [WARNING][5183] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--c51a7922c9-k8s-calico--kube--controllers--86c5674785--bs7n8-eth0", GenerateName:"calico-kube-controllers-86c5674785-", Namespace:"calico-system", SelfLink:"", UID:"57cd90f3-35a2-40bb-93fb-693c3ffcd73d", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 18, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86c5674785", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-c51a7922c9", ContainerID:"fdc6edca14517e156a46e9cebcaeb939e0078635d422542b59847ce7a0b1569b", Pod:"calico-kube-controllers-86c5674785-bs7n8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.15.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4cfc28f5684", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:19:31.507903 env[1586]: 2025-11-01 00:19:31.460 [INFO][5183] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" Nov 1 00:19:31.507903 env[1586]: 2025-11-01 00:19:31.460 [INFO][5183] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" iface="eth0" netns="" Nov 1 00:19:31.507903 env[1586]: 2025-11-01 00:19:31.460 [INFO][5183] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" Nov 1 00:19:31.507903 env[1586]: 2025-11-01 00:19:31.460 [INFO][5183] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" Nov 1 00:19:31.507903 env[1586]: 2025-11-01 00:19:31.494 [INFO][5190] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" HandleID="k8s-pod-network.99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--kube--controllers--86c5674785--bs7n8-eth0" Nov 1 00:19:31.507903 env[1586]: 2025-11-01 00:19:31.494 [INFO][5190] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:19:31.507903 env[1586]: 2025-11-01 00:19:31.494 [INFO][5190] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:19:31.507903 env[1586]: 2025-11-01 00:19:31.503 [WARNING][5190] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" HandleID="k8s-pod-network.99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--kube--controllers--86c5674785--bs7n8-eth0" Nov 1 00:19:31.507903 env[1586]: 2025-11-01 00:19:31.503 [INFO][5190] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" HandleID="k8s-pod-network.99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--kube--controllers--86c5674785--bs7n8-eth0" Nov 1 00:19:31.507903 env[1586]: 2025-11-01 00:19:31.505 [INFO][5190] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:19:31.507903 env[1586]: 2025-11-01 00:19:31.506 [INFO][5183] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9" Nov 1 00:19:31.507903 env[1586]: time="2025-11-01T00:19:31.507768805Z" level=info msg="TearDown network for sandbox \"99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9\" successfully" Nov 1 00:19:31.514958 env[1586]: time="2025-11-01T00:19:31.514916561Z" level=info msg="RemovePodSandbox \"99653e8afbe8ff326c65658fbd574168a9e407d888d1a97717a34b42b356abb9\" returns successfully" Nov 1 00:19:31.515467 env[1586]: time="2025-11-01T00:19:31.515437920Z" level=info msg="StopPodSandbox for \"f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24\"" Nov 1 00:19:31.586177 env[1586]: 2025-11-01 00:19:31.552 [WARNING][5205] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--qwkg7-eth0", GenerateName:"calico-apiserver-6c8dcbbd64-", Namespace:"calico-apiserver", SelfLink:"", UID:"da0e9dac-d5af-4669-8132-3ec847bb81ba", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 18, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c8dcbbd64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-c51a7922c9", ContainerID:"3d12e0cf8e497455f6a4f9278d47f0d78754292b5d61bf288d2aecadb11ebcba", Pod:"calico-apiserver-6c8dcbbd64-qwkg7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliada09114b1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:19:31.586177 env[1586]: 2025-11-01 00:19:31.552 [INFO][5205] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" Nov 1 00:19:31.586177 env[1586]: 2025-11-01 00:19:31.552 [INFO][5205] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" iface="eth0" netns="" Nov 1 00:19:31.586177 env[1586]: 2025-11-01 00:19:31.552 [INFO][5205] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" Nov 1 00:19:31.586177 env[1586]: 2025-11-01 00:19:31.552 [INFO][5205] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" Nov 1 00:19:31.586177 env[1586]: 2025-11-01 00:19:31.570 [INFO][5212] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" HandleID="k8s-pod-network.f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--qwkg7-eth0" Nov 1 00:19:31.586177 env[1586]: 2025-11-01 00:19:31.570 [INFO][5212] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:19:31.586177 env[1586]: 2025-11-01 00:19:31.570 [INFO][5212] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:19:31.586177 env[1586]: 2025-11-01 00:19:31.580 [WARNING][5212] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" HandleID="k8s-pod-network.f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--qwkg7-eth0" Nov 1 00:19:31.586177 env[1586]: 2025-11-01 00:19:31.580 [INFO][5212] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" HandleID="k8s-pod-network.f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--qwkg7-eth0" Nov 1 00:19:31.586177 env[1586]: 2025-11-01 00:19:31.582 [INFO][5212] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:19:31.586177 env[1586]: 2025-11-01 00:19:31.583 [INFO][5205] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" Nov 1 00:19:31.586826 env[1586]: time="2025-11-01T00:19:31.586790073Z" level=info msg="TearDown network for sandbox \"f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24\" successfully" Nov 1 00:19:31.586901 env[1586]: time="2025-11-01T00:19:31.586884873Z" level=info msg="StopPodSandbox for \"f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24\" returns successfully" Nov 1 00:19:31.590808 env[1586]: time="2025-11-01T00:19:31.590366390Z" level=info msg="RemovePodSandbox for \"f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24\"" Nov 1 00:19:31.590808 env[1586]: time="2025-11-01T00:19:31.590408710Z" level=info msg="Forcibly stopping sandbox \"f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24\"" Nov 1 00:19:31.603119 env[1586]: time="2025-11-01T00:19:31.603062222Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:19:31.610997 env[1586]: time="2025-11-01T00:19:31.610922457Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:19:31.611209 kubelet[2677]: E1101 00:19:31.611149 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:19:31.611209 kubelet[2677]: E1101 00:19:31.611199 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:19:31.613875 kubelet[2677]: E1101 00:19:31.613809 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3429d9572f3c4ccbba53eb23e40c8366,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rqvq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b964bb46-4rknd_calico-system(7f2d9b8c-e77a-4876-aeb0-3b35b890f02a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:19:31.616209 env[1586]: time="2025-11-01T00:19:31.616172413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:19:31.705440 
env[1586]: 2025-11-01 00:19:31.644 [WARNING][5226] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--qwkg7-eth0", GenerateName:"calico-apiserver-6c8dcbbd64-", Namespace:"calico-apiserver", SelfLink:"", UID:"da0e9dac-d5af-4669-8132-3ec847bb81ba", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 18, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c8dcbbd64", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-c51a7922c9", ContainerID:"3d12e0cf8e497455f6a4f9278d47f0d78754292b5d61bf288d2aecadb11ebcba", Pod:"calico-apiserver-6c8dcbbd64-qwkg7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliada09114b1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:19:31.705440 env[1586]: 2025-11-01 00:19:31.644 [INFO][5226] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" Nov 1 00:19:31.705440 env[1586]: 2025-11-01 00:19:31.644 [INFO][5226] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" iface="eth0" netns="" Nov 1 00:19:31.705440 env[1586]: 2025-11-01 00:19:31.644 [INFO][5226] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" Nov 1 00:19:31.705440 env[1586]: 2025-11-01 00:19:31.644 [INFO][5226] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" Nov 1 00:19:31.705440 env[1586]: 2025-11-01 00:19:31.667 [INFO][5233] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" HandleID="k8s-pod-network.f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--qwkg7-eth0" Nov 1 00:19:31.705440 env[1586]: 2025-11-01 00:19:31.667 [INFO][5233] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:19:31.705440 env[1586]: 2025-11-01 00:19:31.667 [INFO][5233] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:19:31.705440 env[1586]: 2025-11-01 00:19:31.690 [WARNING][5233] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" HandleID="k8s-pod-network.f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--qwkg7-eth0" Nov 1 00:19:31.705440 env[1586]: 2025-11-01 00:19:31.690 [INFO][5233] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" HandleID="k8s-pod-network.f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" Workload="ci--3510.3.8--n--c51a7922c9-k8s-calico--apiserver--6c8dcbbd64--qwkg7-eth0" Nov 1 00:19:31.705440 env[1586]: 2025-11-01 00:19:31.693 [INFO][5233] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:19:31.705440 env[1586]: 2025-11-01 00:19:31.694 [INFO][5226] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24" Nov 1 00:19:31.705957 env[1586]: time="2025-11-01T00:19:31.705924673Z" level=info msg="TearDown network for sandbox \"f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24\" successfully" Nov 1 00:19:31.713853 env[1586]: time="2025-11-01T00:19:31.713803628Z" level=info msg="RemovePodSandbox \"f1e200d3c32a08714c7640ea5b7bc53a47fcb91a1ca8fbbbe0939cef6fc22c24\" returns successfully" Nov 1 00:19:31.714675 env[1586]: time="2025-11-01T00:19:31.714649187Z" level=info msg="StopPodSandbox for \"2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455\"" Nov 1 00:19:31.822592 env[1586]: 2025-11-01 00:19:31.770 [WARNING][5248] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--q8677-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a4048da5-d286-44c8-9ec0-180e591b9eec", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 18, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-c51a7922c9", ContainerID:"6e25a0da95d81e86b701606a50d233d7e1a39ce3c66b64c4a029b1ca98fcec01", Pod:"coredns-668d6bf9bc-q8677", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc14c83cef6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:19:31.822592 env[1586]: 2025-11-01 
00:19:31.771 [INFO][5248] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" Nov 1 00:19:31.822592 env[1586]: 2025-11-01 00:19:31.771 [INFO][5248] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" iface="eth0" netns="" Nov 1 00:19:31.822592 env[1586]: 2025-11-01 00:19:31.771 [INFO][5248] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" Nov 1 00:19:31.822592 env[1586]: 2025-11-01 00:19:31.771 [INFO][5248] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" Nov 1 00:19:31.822592 env[1586]: 2025-11-01 00:19:31.794 [INFO][5256] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" HandleID="k8s-pod-network.2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" Workload="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--q8677-eth0" Nov 1 00:19:31.822592 env[1586]: 2025-11-01 00:19:31.794 [INFO][5256] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:19:31.822592 env[1586]: 2025-11-01 00:19:31.796 [INFO][5256] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:19:31.822592 env[1586]: 2025-11-01 00:19:31.818 [WARNING][5256] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" HandleID="k8s-pod-network.2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" Workload="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--q8677-eth0" Nov 1 00:19:31.822592 env[1586]: 2025-11-01 00:19:31.818 [INFO][5256] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" HandleID="k8s-pod-network.2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" Workload="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--q8677-eth0" Nov 1 00:19:31.822592 env[1586]: 2025-11-01 00:19:31.819 [INFO][5256] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:19:31.822592 env[1586]: 2025-11-01 00:19:31.821 [INFO][5248] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" Nov 1 00:19:31.823130 env[1586]: time="2025-11-01T00:19:31.823085675Z" level=info msg="TearDown network for sandbox \"2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455\" successfully" Nov 1 00:19:31.823207 env[1586]: time="2025-11-01T00:19:31.823190715Z" level=info msg="StopPodSandbox for \"2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455\" returns successfully" Nov 1 00:19:31.828366 env[1586]: time="2025-11-01T00:19:31.828336672Z" level=info msg="RemovePodSandbox for \"2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455\"" Nov 1 00:19:31.828552 env[1586]: time="2025-11-01T00:19:31.828512592Z" level=info msg="Forcibly stopping sandbox \"2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455\"" Nov 1 00:19:31.881098 env[1586]: time="2025-11-01T00:19:31.881050277Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:19:31.884513 env[1586]: time="2025-11-01T00:19:31.884425554Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:19:31.887197 kubelet[2677]: E1101 00:19:31.884995 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:19:31.887197 kubelet[2677]: E1101 00:19:31.885090 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:19:31.887197 kubelet[2677]: E1101 00:19:31.885323 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rqvq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b964bb46-4rknd_calico-system(7f2d9b8c-e77a-4876-aeb0-3b35b890f02a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:19:31.887197 kubelet[2677]: E1101 00:19:31.887047 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b964bb46-4rknd" podUID="7f2d9b8c-e77a-4876-aeb0-3b35b890f02a" Nov 1 00:19:31.933396 env[1586]: 2025-11-01 00:19:31.878 [WARNING][5271] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--q8677-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a4048da5-d286-44c8-9ec0-180e591b9eec", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 18, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-c51a7922c9", ContainerID:"6e25a0da95d81e86b701606a50d233d7e1a39ce3c66b64c4a029b1ca98fcec01", Pod:"coredns-668d6bf9bc-q8677", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc14c83cef6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:19:31.933396 env[1586]: 2025-11-01 
00:19:31.879 [INFO][5271] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" Nov 1 00:19:31.933396 env[1586]: 2025-11-01 00:19:31.879 [INFO][5271] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" iface="eth0" netns="" Nov 1 00:19:31.933396 env[1586]: 2025-11-01 00:19:31.879 [INFO][5271] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" Nov 1 00:19:31.933396 env[1586]: 2025-11-01 00:19:31.879 [INFO][5271] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" Nov 1 00:19:31.933396 env[1586]: 2025-11-01 00:19:31.915 [INFO][5278] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" HandleID="k8s-pod-network.2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" Workload="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--q8677-eth0" Nov 1 00:19:31.933396 env[1586]: 2025-11-01 00:19:31.915 [INFO][5278] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:19:31.933396 env[1586]: 2025-11-01 00:19:31.915 [INFO][5278] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:19:31.933396 env[1586]: 2025-11-01 00:19:31.929 [WARNING][5278] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" HandleID="k8s-pod-network.2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" Workload="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--q8677-eth0" Nov 1 00:19:31.933396 env[1586]: 2025-11-01 00:19:31.929 [INFO][5278] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" HandleID="k8s-pod-network.2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" Workload="ci--3510.3.8--n--c51a7922c9-k8s-coredns--668d6bf9bc--q8677-eth0" Nov 1 00:19:31.933396 env[1586]: 2025-11-01 00:19:31.930 [INFO][5278] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:19:31.933396 env[1586]: 2025-11-01 00:19:31.932 [INFO][5271] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455" Nov 1 00:19:31.933949 env[1586]: time="2025-11-01T00:19:31.933905001Z" level=info msg="TearDown network for sandbox \"2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455\" successfully" Nov 1 00:19:31.941839 env[1586]: time="2025-11-01T00:19:31.941802916Z" level=info msg="RemovePodSandbox \"2e740cc5c391ba94326e267d08470d612732df8ca7ae6f691d2fd42f3dc56455\" returns successfully" Nov 1 00:19:31.944852 env[1586]: time="2025-11-01T00:19:31.944826274Z" level=info msg="StopPodSandbox for \"28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857\"" Nov 1 00:19:32.076235 env[1586]: 2025-11-01 00:19:32.002 [WARNING][5292] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--c51a7922c9-k8s-csi--node--driver--4mt97-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8e50a05e-0803-4e20-bd2b-ccf8c9d67c23", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 18, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-c51a7922c9", ContainerID:"206236d4118de9711e41090c6ea78bdefa606aae17d6f5b028445ef81dc999b5", Pod:"csi-node-driver-4mt97", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.15.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8adfc9d5d37", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:19:32.076235 env[1586]: 2025-11-01 00:19:32.003 [INFO][5292] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" Nov 1 00:19:32.076235 env[1586]: 2025-11-01 00:19:32.003 [INFO][5292] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" iface="eth0" netns="" Nov 1 00:19:32.076235 env[1586]: 2025-11-01 00:19:32.003 [INFO][5292] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" Nov 1 00:19:32.076235 env[1586]: 2025-11-01 00:19:32.003 [INFO][5292] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" Nov 1 00:19:32.076235 env[1586]: 2025-11-01 00:19:32.063 [INFO][5299] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" HandleID="k8s-pod-network.28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" Workload="ci--3510.3.8--n--c51a7922c9-k8s-csi--node--driver--4mt97-eth0" Nov 1 00:19:32.076235 env[1586]: 2025-11-01 00:19:32.063 [INFO][5299] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:19:32.076235 env[1586]: 2025-11-01 00:19:32.063 [INFO][5299] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:19:32.076235 env[1586]: 2025-11-01 00:19:32.072 [WARNING][5299] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" HandleID="k8s-pod-network.28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" Workload="ci--3510.3.8--n--c51a7922c9-k8s-csi--node--driver--4mt97-eth0" Nov 1 00:19:32.076235 env[1586]: 2025-11-01 00:19:32.072 [INFO][5299] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" HandleID="k8s-pod-network.28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" Workload="ci--3510.3.8--n--c51a7922c9-k8s-csi--node--driver--4mt97-eth0" Nov 1 00:19:32.076235 env[1586]: 2025-11-01 00:19:32.073 [INFO][5299] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:19:32.076235 env[1586]: 2025-11-01 00:19:32.074 [INFO][5292] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" Nov 1 00:19:32.076812 env[1586]: time="2025-11-01T00:19:32.076763587Z" level=info msg="TearDown network for sandbox \"28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857\" successfully" Nov 1 00:19:32.076888 env[1586]: time="2025-11-01T00:19:32.076871987Z" level=info msg="StopPodSandbox for \"28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857\" returns successfully" Nov 1 00:19:32.077764 env[1586]: time="2025-11-01T00:19:32.077738466Z" level=info msg="RemovePodSandbox for \"28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857\"" Nov 1 00:19:32.078012 env[1586]: time="2025-11-01T00:19:32.077963106Z" level=info msg="Forcibly stopping sandbox \"28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857\"" Nov 1 00:19:32.201847 env[1586]: 2025-11-01 00:19:32.133 [WARNING][5314] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.8--n--c51a7922c9-k8s-csi--node--driver--4mt97-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8e50a05e-0803-4e20-bd2b-ccf8c9d67c23", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 18, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.8-n-c51a7922c9", ContainerID:"206236d4118de9711e41090c6ea78bdefa606aae17d6f5b028445ef81dc999b5", Pod:"csi-node-driver-4mt97", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.15.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8adfc9d5d37", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:19:32.201847 env[1586]: 2025-11-01 00:19:32.133 [INFO][5314] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" Nov 1 00:19:32.201847 env[1586]: 2025-11-01 00:19:32.133 [INFO][5314] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" iface="eth0" netns="" Nov 1 00:19:32.201847 env[1586]: 2025-11-01 00:19:32.133 [INFO][5314] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" Nov 1 00:19:32.201847 env[1586]: 2025-11-01 00:19:32.134 [INFO][5314] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" Nov 1 00:19:32.201847 env[1586]: 2025-11-01 00:19:32.180 [INFO][5321] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" HandleID="k8s-pod-network.28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" Workload="ci--3510.3.8--n--c51a7922c9-k8s-csi--node--driver--4mt97-eth0" Nov 1 00:19:32.201847 env[1586]: 2025-11-01 00:19:32.180 [INFO][5321] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:19:32.201847 env[1586]: 2025-11-01 00:19:32.180 [INFO][5321] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:19:32.201847 env[1586]: 2025-11-01 00:19:32.193 [WARNING][5321] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" HandleID="k8s-pod-network.28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" Workload="ci--3510.3.8--n--c51a7922c9-k8s-csi--node--driver--4mt97-eth0" Nov 1 00:19:32.201847 env[1586]: 2025-11-01 00:19:32.193 [INFO][5321] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" HandleID="k8s-pod-network.28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" Workload="ci--3510.3.8--n--c51a7922c9-k8s-csi--node--driver--4mt97-eth0" Nov 1 00:19:32.201847 env[1586]: 2025-11-01 00:19:32.199 [INFO][5321] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:19:32.201847 env[1586]: 2025-11-01 00:19:32.200 [INFO][5314] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857" Nov 1 00:19:32.202310 env[1586]: time="2025-11-01T00:19:32.201873264Z" level=info msg="TearDown network for sandbox \"28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857\" successfully" Nov 1 00:19:32.208802 env[1586]: time="2025-11-01T00:19:32.208742940Z" level=info msg="RemovePodSandbox \"28541518c0cbca8c8d327bae3cff0afbf68e90955ad80784192aed358a54d857\" returns successfully" Nov 1 00:19:33.353898 env[1586]: time="2025-11-01T00:19:33.353837506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:19:33.588737 env[1586]: time="2025-11-01T00:19:33.588692352Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:19:33.592421 env[1586]: time="2025-11-01T00:19:33.592375030Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:19:33.592749 kubelet[2677]: E1101 00:19:33.592705 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:19:33.593029 kubelet[2677]: E1101 00:19:33.592758 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:19:33.593029 kubelet[2677]: E1101 00:19:33.592873 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gzr2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-86c5674785-bs7n8_calico-system(57cd90f3-35a2-40bb-93fb-693c3ffcd73d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:19:33.594379 kubelet[2677]: E1101 00:19:33.594334 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86c5674785-bs7n8" podUID="57cd90f3-35a2-40bb-93fb-693c3ffcd73d" Nov 1 00:19:38.353906 env[1586]: time="2025-11-01T00:19:38.353852822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:19:38.605462 env[1586]: 
time="2025-11-01T00:19:38.605280745Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:19:38.608459 env[1586]: time="2025-11-01T00:19:38.608394863Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:19:38.608799 kubelet[2677]: E1101 00:19:38.608746 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:19:38.609107 kubelet[2677]: E1101 00:19:38.608814 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:19:38.609107 kubelet[2677]: E1101 00:19:38.608960 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x7dtq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pw8c5_calico-system(1e69bd0a-b324-4064-9086-3d6aa0d23b51): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:19:38.610465 kubelet[2677]: E1101 00:19:38.610431 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pw8c5" podUID="1e69bd0a-b324-4064-9086-3d6aa0d23b51" Nov 1 00:19:39.353661 env[1586]: time="2025-11-01T00:19:39.353411518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:19:39.645432 env[1586]: time="2025-11-01T00:19:39.645157257Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Nov 1 00:19:39.649961 env[1586]: time="2025-11-01T00:19:39.649849774Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:19:39.650219 kubelet[2677]: E1101 00:19:39.650185 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:19:39.650572 kubelet[2677]: E1101 00:19:39.650550 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:19:39.650788 kubelet[2677]: E1101 00:19:39.650747 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bfrqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c8dcbbd64-qwkg7_calico-apiserver(da0e9dac-d5af-4669-8132-3ec847bb81ba): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:19:39.652115 kubelet[2677]: E1101 00:19:39.652060 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-qwkg7" podUID="da0e9dac-d5af-4669-8132-3ec847bb81ba" Nov 1 00:19:40.354537 env[1586]: time="2025-11-01T00:19:40.354409098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:19:40.587482 env[1586]: time="2025-11-01T00:19:40.587434154Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:19:40.590046 env[1586]: time="2025-11-01T00:19:40.589990992Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:19:40.590372 kubelet[2677]: E1101 00:19:40.590333 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:19:40.590444 kubelet[2677]: E1101 00:19:40.590391 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:19:40.590572 kubelet[2677]: E1101 00:19:40.590518 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kh9cr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c8dcbbd64-p85vf_calico-apiserver(ab7373cc-dd84-417d-8edc-59fbf979f4b4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:19:40.592013 kubelet[2677]: E1101 00:19:40.591980 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-p85vf" podUID="ab7373cc-dd84-417d-8edc-59fbf979f4b4" Nov 1 00:19:41.353275 env[1586]: time="2025-11-01T00:19:41.353216284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:19:41.619586 env[1586]: time="2025-11-01T00:19:41.619452881Z" 
level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:19:41.624471 env[1586]: time="2025-11-01T00:19:41.624412158Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:19:41.624706 kubelet[2677]: E1101 00:19:41.624667 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:19:41.625001 kubelet[2677]: E1101 00:19:41.624726 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:19:41.625001 kubelet[2677]: E1101 00:19:41.624856 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gnml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4mt97_calico-system(8e50a05e-0803-4e20-bd2b-ccf8c9d67c23): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:19:41.627212 env[1586]: time="2025-11-01T00:19:41.627162636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:19:41.861360 env[1586]: time="2025-11-01T00:19:41.861309933Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:19:41.868164 env[1586]: time="2025-11-01T00:19:41.868065129Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:19:41.868403 kubelet[2677]: E1101 00:19:41.868340 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:19:41.868465 kubelet[2677]: E1101 00:19:41.868403 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:19:41.868597 kubelet[2677]: E1101 00:19:41.868548 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 
--csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gnml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4mt97_calico-system(8e50a05e-0803-4e20-bd2b-ccf8c9d67c23): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:19:41.870015 kubelet[2677]: E1101 00:19:41.869895 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4mt97" podUID="8e50a05e-0803-4e20-bd2b-ccf8c9d67c23" Nov 1 00:19:44.353213 kubelet[2677]: E1101 00:19:44.353179 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86c5674785-bs7n8" podUID="57cd90f3-35a2-40bb-93fb-693c3ffcd73d" Nov 1 00:19:44.354668 kubelet[2677]: E1101 00:19:44.354609 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b964bb46-4rknd" podUID="7f2d9b8c-e77a-4876-aeb0-3b35b890f02a" Nov 1 00:19:49.353787 kubelet[2677]: E1101 00:19:49.353747 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pw8c5" podUID="1e69bd0a-b324-4064-9086-3d6aa0d23b51" Nov 1 00:19:52.353865 kubelet[2677]: E1101 00:19:52.353604 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-p85vf" 
podUID="ab7373cc-dd84-417d-8edc-59fbf979f4b4" Nov 1 00:19:54.354552 kubelet[2677]: E1101 00:19:54.354499 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-qwkg7" podUID="da0e9dac-d5af-4669-8132-3ec847bb81ba" Nov 1 00:19:54.790471 systemd[1]: run-containerd-runc-k8s.io-db6f76a2b7ab6ed2fb0e97e0b262e36f004b1750c5d73343a16030568d48efc1-runc.6htqpd.mount: Deactivated successfully. Nov 1 00:19:55.353538 kubelet[2677]: E1101 00:19:55.353467 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4mt97" podUID="8e50a05e-0803-4e20-bd2b-ccf8c9d67c23" Nov 1 00:19:56.353896 env[1586]: time="2025-11-01T00:19:56.353858266Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:19:56.584412 env[1586]: time="2025-11-01T00:19:56.584359658Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:19:56.587099 env[1586]: time="2025-11-01T00:19:56.587035416Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:19:56.587335 kubelet[2677]: E1101 00:19:56.587268 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:19:56.587624 kubelet[2677]: E1101 00:19:56.587349 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:19:56.587624 kubelet[2677]: E1101 00:19:56.587453 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3429d9572f3c4ccbba53eb23e40c8366,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rqvq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b964bb46-4rknd_calico-system(7f2d9b8c-e77a-4876-aeb0-3b35b890f02a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:19:56.589639 env[1586]: time="2025-11-01T00:19:56.589606015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:19:56.833436 
env[1586]: time="2025-11-01T00:19:56.833387359Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:19:56.836949 env[1586]: time="2025-11-01T00:19:56.836888237Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:19:56.837423 kubelet[2677]: E1101 00:19:56.837380 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:19:56.837562 kubelet[2677]: E1101 00:19:56.837543 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:19:56.838157 kubelet[2677]: E1101 00:19:56.837748 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rqvq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b964bb46-4rknd_calico-system(7f2d9b8c-e77a-4876-aeb0-3b35b890f02a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:19:56.839524 kubelet[2677]: E1101 00:19:56.839479 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b964bb46-4rknd" podUID="7f2d9b8c-e77a-4876-aeb0-3b35b890f02a" Nov 1 00:19:59.353314 env[1586]: time="2025-11-01T00:19:59.353263248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:19:59.603129 env[1586]: time="2025-11-01T00:19:59.603027871Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:19:59.606208 env[1586]: time="2025-11-01T00:19:59.606107110Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:19:59.606470 kubelet[2677]: E1101 00:19:59.606433 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:19:59.606795 kubelet[2677]: E1101 00:19:59.606775 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:19:59.607027 kubelet[2677]: E1101 00:19:59.606982 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gzr2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/ser
viceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-86c5674785-bs7n8_calico-system(57cd90f3-35a2-40bb-93fb-693c3ffcd73d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:19:59.608587 kubelet[2677]: E1101 00:19:59.608559 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86c5674785-bs7n8" podUID="57cd90f3-35a2-40bb-93fb-693c3ffcd73d" Nov 1 00:20:04.355383 env[1586]: time="2025-11-01T00:20:04.355097456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:20:04.588378 env[1586]: time="2025-11-01T00:20:04.588325211Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:20:04.592288 env[1586]: time="2025-11-01T00:20:04.592230689Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:20:04.592517 kubelet[2677]: E1101 00:20:04.592469 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:20:04.592789 kubelet[2677]: E1101 00:20:04.592528 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:20:04.592789 kubelet[2677]: E1101 00:20:04.592671 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x7dtq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pw8c5_calico-system(1e69bd0a-b324-4064-9086-3d6aa0d23b51): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:20:04.594119 kubelet[2677]: E1101 00:20:04.594079 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pw8c5" podUID="1e69bd0a-b324-4064-9086-3d6aa0d23b51" Nov 1 00:20:05.354213 env[1586]: time="2025-11-01T00:20:05.354006841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:20:05.611123 env[1586]: time="2025-11-01T00:20:05.610893144Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Nov 1 00:20:05.613995 env[1586]: time="2025-11-01T00:20:05.613892382Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:20:05.614247 kubelet[2677]: E1101 00:20:05.614191 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:20:05.614543 kubelet[2677]: E1101 00:20:05.614256 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:20:05.614543 kubelet[2677]: E1101 00:20:05.614399 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kh9cr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c8dcbbd64-p85vf_calico-apiserver(ab7373cc-dd84-417d-8edc-59fbf979f4b4): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:20:05.615752 kubelet[2677]: E1101 00:20:05.615722 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-p85vf" podUID="ab7373cc-dd84-417d-8edc-59fbf979f4b4" Nov 1 00:20:06.353782 env[1586]: time="2025-11-01T00:20:06.353577868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:20:06.599080 env[1586]: time="2025-11-01T00:20:06.599023931Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:20:06.601967 env[1586]: time="2025-11-01T00:20:06.601898445Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:20:06.602153 kubelet[2677]: E1101 00:20:06.602115 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:20:06.602255 kubelet[2677]: E1101 00:20:06.602239 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:20:06.602478 kubelet[2677]: E1101 00:20:06.602440 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bfrqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c8dcbbd64-qwkg7_calico-apiserver(da0e9dac-d5af-4669-8132-3ec847bb81ba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:20:06.604560 kubelet[2677]: E1101 00:20:06.604035 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-qwkg7" podUID="da0e9dac-d5af-4669-8132-3ec847bb81ba" Nov 1 00:20:07.354209 env[1586]: time="2025-11-01T00:20:07.354154569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:20:07.575992 env[1586]: time="2025-11-01T00:20:07.575933339Z" 
level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:20:07.586722 env[1586]: time="2025-11-01T00:20:07.586639737Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:20:07.586927 kubelet[2677]: E1101 00:20:07.586889 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:20:07.587190 kubelet[2677]: E1101 00:20:07.586948 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:20:07.587471 kubelet[2677]: E1101 00:20:07.587073 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gnml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4mt97_calico-system(8e50a05e-0803-4e20-bd2b-ccf8c9d67c23): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:20:07.589516 env[1586]: time="2025-11-01T00:20:07.589477414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:20:07.832376 env[1586]: time="2025-11-01T00:20:07.832176309Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:20:07.835072 env[1586]: time="2025-11-01T00:20:07.834973426Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:20:07.835209 kubelet[2677]: E1101 00:20:07.835162 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:20:07.835280 kubelet[2677]: E1101 00:20:07.835219 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:20:07.835390 kubelet[2677]: E1101 00:20:07.835347 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 
--csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gnml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4mt97_calico-system(8e50a05e-0803-4e20-bd2b-ccf8c9d67c23): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:20:07.836707 kubelet[2677]: E1101 00:20:07.836664 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4mt97" podUID="8e50a05e-0803-4e20-bd2b-ccf8c9d67c23" Nov 1 00:20:09.354065 kubelet[2677]: E1101 00:20:09.354006 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-6b964bb46-4rknd" podUID="7f2d9b8c-e77a-4876-aeb0-3b35b890f02a" Nov 1 00:20:10.355307 kubelet[2677]: E1101 00:20:10.355257 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86c5674785-bs7n8" podUID="57cd90f3-35a2-40bb-93fb-693c3ffcd73d" Nov 1 00:20:19.353445 kubelet[2677]: E1101 00:20:19.353409 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-qwkg7" podUID="da0e9dac-d5af-4669-8132-3ec847bb81ba" Nov 1 00:20:19.354010 kubelet[2677]: E1101 00:20:19.353956 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pw8c5" 
podUID="1e69bd0a-b324-4064-9086-3d6aa0d23b51" Nov 1 00:20:19.354470 kubelet[2677]: E1101 00:20:19.354425 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4mt97" podUID="8e50a05e-0803-4e20-bd2b-ccf8c9d67c23" Nov 1 00:20:20.355247 kubelet[2677]: E1101 00:20:20.355153 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-p85vf" podUID="ab7373cc-dd84-417d-8edc-59fbf979f4b4" Nov 1 00:20:22.353343 kubelet[2677]: E1101 00:20:22.353273 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86c5674785-bs7n8" podUID="57cd90f3-35a2-40bb-93fb-693c3ffcd73d" Nov 1 00:20:24.354277 kubelet[2677]: E1101 00:20:24.354237 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b964bb46-4rknd" podUID="7f2d9b8c-e77a-4876-aeb0-3b35b890f02a" Nov 1 00:20:24.786262 systemd[1]: run-containerd-runc-k8s.io-db6f76a2b7ab6ed2fb0e97e0b262e36f004b1750c5d73343a16030568d48efc1-runc.LLnglf.mount: Deactivated successfully. 
Nov 1 00:20:30.355334 kubelet[2677]: E1101 00:20:30.355275 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pw8c5" podUID="1e69bd0a-b324-4064-9086-3d6aa0d23b51" Nov 1 00:20:30.355733 kubelet[2677]: E1101 00:20:30.355618 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-qwkg7" podUID="da0e9dac-d5af-4669-8132-3ec847bb81ba" Nov 1 00:20:33.353874 kubelet[2677]: E1101 00:20:33.353827 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4mt97" podUID="8e50a05e-0803-4e20-bd2b-ccf8c9d67c23" Nov 1 00:20:34.354020 kubelet[2677]: E1101 00:20:34.353898 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-p85vf" podUID="ab7373cc-dd84-417d-8edc-59fbf979f4b4" Nov 1 00:20:35.352827 kubelet[2677]: E1101 00:20:35.352783 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86c5674785-bs7n8" podUID="57cd90f3-35a2-40bb-93fb-693c3ffcd73d" Nov 1 00:20:38.353236 env[1586]: time="2025-11-01T00:20:38.353178326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:20:38.608967 env[1586]: time="2025-11-01T00:20:38.608835229Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:20:38.611618 env[1586]: 
time="2025-11-01T00:20:38.611562522Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:20:38.611893 kubelet[2677]: E1101 00:20:38.611860 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:20:38.612211 kubelet[2677]: E1101 00:20:38.612189 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:20:38.612440 kubelet[2677]: E1101 00:20:38.612397 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3429d9572f3c4ccbba53eb23e40c8366,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rqvq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b964bb46-4rknd_calico-system(7f2d9b8c-e77a-4876-aeb0-3b35b890f02a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:20:38.614710 env[1586]: time="2025-11-01T00:20:38.614678050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:20:38.857664 
env[1586]: time="2025-11-01T00:20:38.857611962Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:20:38.861024 env[1586]: time="2025-11-01T00:20:38.860894929Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:20:38.861147 kubelet[2677]: E1101 00:20:38.861117 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:20:38.861198 kubelet[2677]: E1101 00:20:38.861161 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:20:38.861801 kubelet[2677]: E1101 00:20:38.861264 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rqvq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b964bb46-4rknd_calico-system(7f2d9b8c-e77a-4876-aeb0-3b35b890f02a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:20:38.862751 kubelet[2677]: E1101 00:20:38.862705 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b964bb46-4rknd" podUID="7f2d9b8c-e77a-4876-aeb0-3b35b890f02a" Nov 1 00:20:42.354614 kubelet[2677]: E1101 00:20:42.354567 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pw8c5" podUID="1e69bd0a-b324-4064-9086-3d6aa0d23b51" Nov 1 00:20:45.353087 kubelet[2677]: E1101 00:20:45.353052 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-qwkg7" podUID="da0e9dac-d5af-4669-8132-3ec847bb81ba" Nov 1 00:20:47.353740 env[1586]: time="2025-11-01T00:20:47.353480361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:20:47.636000 env[1586]: time="2025-11-01T00:20:47.635882732Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:20:47.639053 env[1586]: time="2025-11-01T00:20:47.638985224Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:20:47.639437 kubelet[2677]: E1101 00:20:47.639397 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:20:47.639735 kubelet[2677]: E1101 00:20:47.639458 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:20:47.639735 kubelet[2677]: E1101 00:20:47.639588 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kh9cr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c8dcbbd64-p85vf_calico-apiserver(ab7373cc-dd84-417d-8edc-59fbf979f4b4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:20:47.641109 kubelet[2677]: E1101 00:20:47.641061 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-p85vf" podUID="ab7373cc-dd84-417d-8edc-59fbf979f4b4" Nov 1 00:20:48.354591 env[1586]: time="2025-11-01T00:20:48.354539886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:20:48.610056 env[1586]: time="2025-11-01T00:20:48.609943969Z" 
level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:20:48.612966 env[1586]: time="2025-11-01T00:20:48.612882223Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:20:48.613217 kubelet[2677]: E1101 00:20:48.613167 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:20:48.613312 kubelet[2677]: E1101 00:20:48.613235 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:20:48.613613 kubelet[2677]: E1101 00:20:48.613574 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gnml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4mt97_calico-system(8e50a05e-0803-4e20-bd2b-ccf8c9d67c23): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:20:48.615728 env[1586]: time="2025-11-01T00:20:48.615689278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:20:48.852890 env[1586]: time="2025-11-01T00:20:48.852831044Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:20:48.855563 env[1586]: time="2025-11-01T00:20:48.855485820Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:20:48.855786 kubelet[2677]: E1101 00:20:48.855739 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:20:48.856057 kubelet[2677]: E1101 00:20:48.855789 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:20:48.856057 kubelet[2677]: E1101 00:20:48.855921 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 
--csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gnml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4mt97_calico-system(8e50a05e-0803-4e20-bd2b-ccf8c9d67c23): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:20:48.857307 kubelet[2677]: E1101 00:20:48.857243 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4mt97" podUID="8e50a05e-0803-4e20-bd2b-ccf8c9d67c23" Nov 1 00:20:49.353714 env[1586]: time="2025-11-01T00:20:49.353656017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:20:49.586608 env[1586]: time="2025-11-01T00:20:49.586549286Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:20:49.589412 env[1586]: time="2025-11-01T00:20:49.589349981Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:20:49.589659 kubelet[2677]: E1101 00:20:49.589610 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed 
to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:20:49.589738 kubelet[2677]: E1101 00:20:49.589670 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:20:49.589857 kubelet[2677]: E1101 00:20:49.589806 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gzr2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recur
siveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-86c5674785-bs7n8_calico-system(57cd90f3-35a2-40bb-93fb-693c3ffcd73d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:20:49.591448 kubelet[2677]: E1101 00:20:49.591416 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86c5674785-bs7n8" podUID="57cd90f3-35a2-40bb-93fb-693c3ffcd73d" Nov 1 00:20:51.354264 kubelet[2677]: E1101 00:20:51.354224 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b964bb46-4rknd" podUID="7f2d9b8c-e77a-4876-aeb0-3b35b890f02a" Nov 1 00:20:54.788656 systemd[1]: run-containerd-runc-k8s.io-db6f76a2b7ab6ed2fb0e97e0b262e36f004b1750c5d73343a16030568d48efc1-runc.QNf8EF.mount: Deactivated successfully. 
Nov 1 00:20:55.353923 env[1586]: time="2025-11-01T00:20:55.353882201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:20:55.592643 env[1586]: time="2025-11-01T00:20:55.592591003Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:20:55.595833 env[1586]: time="2025-11-01T00:20:55.595781577Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:20:55.596174 kubelet[2677]: E1101 00:20:55.596135 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:20:55.596539 kubelet[2677]: E1101 00:20:55.596186 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:20:55.596539 kubelet[2677]: E1101 00:20:55.596341 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x7dtq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pw8c5_calico-system(1e69bd0a-b324-4064-9086-3d6aa0d23b51): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:20:55.597826 kubelet[2677]: E1101 00:20:55.597786 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pw8c5" podUID="1e69bd0a-b324-4064-9086-3d6aa0d23b51" Nov 1 00:20:58.356097 env[1586]: time="2025-11-01T00:20:58.355893282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:20:58.606677 env[1586]: time="2025-11-01T00:20:58.606554856Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Nov 1 00:20:58.609888 env[1586]: time="2025-11-01T00:20:58.609831590Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:20:58.610226 kubelet[2677]: E1101 00:20:58.610185 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:20:58.610598 kubelet[2677]: E1101 00:20:58.610577 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:20:58.610819 kubelet[2677]: E1101 00:20:58.610780 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bfrqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c8dcbbd64-qwkg7_calico-apiserver(da0e9dac-d5af-4669-8132-3ec847bb81ba): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:20:58.612186 kubelet[2677]: E1101 00:20:58.612098 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-qwkg7" podUID="da0e9dac-d5af-4669-8132-3ec847bb81ba" Nov 1 00:21:00.355136 kubelet[2677]: E1101 00:21:00.355066 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86c5674785-bs7n8" podUID="57cd90f3-35a2-40bb-93fb-693c3ffcd73d" Nov 1 00:21:00.355967 kubelet[2677]: E1101 00:21:00.355650 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-6c8dcbbd64-p85vf" podUID="ab7373cc-dd84-417d-8edc-59fbf979f4b4" Nov 1 00:21:00.355967 kubelet[2677]: E1101 00:21:00.355920 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4mt97" podUID="8e50a05e-0803-4e20-bd2b-ccf8c9d67c23" Nov 1 00:21:05.353387 kubelet[2677]: E1101 00:21:05.353330 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b964bb46-4rknd" podUID="7f2d9b8c-e77a-4876-aeb0-3b35b890f02a" Nov 1 00:21:07.352961 kubelet[2677]: E1101 00:21:07.352902 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pw8c5" podUID="1e69bd0a-b324-4064-9086-3d6aa0d23b51" Nov 1 00:21:12.353034 kubelet[2677]: E1101 00:21:12.352998 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-p85vf" podUID="ab7373cc-dd84-417d-8edc-59fbf979f4b4" Nov 1 00:21:13.353126 kubelet[2677]: E1101 00:21:13.353090 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-qwkg7" podUID="da0e9dac-d5af-4669-8132-3ec847bb81ba" Nov 1 00:21:13.353615 kubelet[2677]: E1101 00:21:13.353552 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4mt97" podUID="8e50a05e-0803-4e20-bd2b-ccf8c9d67c23" Nov 1 00:21:14.354051 kubelet[2677]: E1101 00:21:14.354011 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86c5674785-bs7n8" podUID="57cd90f3-35a2-40bb-93fb-693c3ffcd73d" Nov 1 00:21:17.354158 kubelet[2677]: E1101 00:21:17.354104 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b964bb46-4rknd" podUID="7f2d9b8c-e77a-4876-aeb0-3b35b890f02a" Nov 1 00:21:18.297940 systemd[1]: Started sshd@7-10.200.20.42:22-10.200.16.10:34734.service. Nov 1 00:21:18.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.42:22-10.200.16.10:34734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:21:18.324789 kernel: audit: type=1130 audit(1761956478.298:446): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.42:22-10.200.16.10:34734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:21:18.743000 audit[5454]: USER_ACCT pid=5454 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:18.744273 sshd[5454]: Accepted publickey for core from 10.200.16.10 port 34734 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:21:18.746028 sshd[5454]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:21:18.745000 audit[5454]: CRED_ACQ pid=5454 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:18.793328 kernel: audit: type=1101 audit(1761956478.743:447): pid=5454 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:18.793444 kernel: audit: type=1103 audit(1761956478.745:448): pid=5454 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:18.807550 kernel: audit: type=1006 audit(1761956478.745:449): pid=5454 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Nov 1 00:21:18.745000 audit[5454]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd4cb49b0 a2=3 a3=1 items=0 ppid=1 pid=5454 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) 
Nov 1 00:21:18.745000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:21:18.842712 kernel: audit: type=1300 audit(1761956478.745:449): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd4cb49b0 a2=3 a3=1 items=0 ppid=1 pid=5454 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:21:18.842799 kernel: audit: type=1327 audit(1761956478.745:449): proctitle=737368643A20636F7265205B707269765D Nov 1 00:21:18.837115 systemd[1]: Started session-10.scope. Nov 1 00:21:18.842346 systemd-logind[1567]: New session 10 of user core. Nov 1 00:21:18.851000 audit[5454]: USER_START pid=5454 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:18.879849 kernel: audit: type=1105 audit(1761956478.851:450): pid=5454 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:18.852000 audit[5457]: CRED_ACQ pid=5457 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:18.909320 kernel: audit: type=1103 audit(1761956478.852:451): pid=5457 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:19.244153 sshd[5454]: pam_unix(sshd:session): 
session closed for user core Nov 1 00:21:19.244000 audit[5454]: USER_END pid=5454 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:19.274075 systemd[1]: sshd@7-10.200.20.42:22-10.200.16.10:34734.service: Deactivated successfully. Nov 1 00:21:19.275199 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 00:21:19.275246 systemd-logind[1567]: Session 10 logged out. Waiting for processes to exit. Nov 1 00:21:19.244000 audit[5454]: CRED_DISP pid=5454 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:19.276619 systemd-logind[1567]: Removed session 10. Nov 1 00:21:19.298297 kernel: audit: type=1106 audit(1761956479.244:452): pid=5454 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:19.298407 kernel: audit: type=1104 audit(1761956479.244:453): pid=5454 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:19.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.42:22-10.200.16.10:34734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:21:21.353139 kubelet[2677]: E1101 00:21:21.353098 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pw8c5" podUID="1e69bd0a-b324-4064-9086-3d6aa0d23b51" Nov 1 00:21:24.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.42:22-10.200.16.10:57016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:21:24.309654 systemd[1]: Started sshd@8-10.200.20.42:22-10.200.16.10:57016.service. Nov 1 00:21:24.315135 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:21:24.315238 kernel: audit: type=1130 audit(1761956484.309:455): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.42:22-10.200.16.10:57016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:21:24.725184 sshd[5468]: Accepted publickey for core from 10.200.16.10 port 57016 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:21:24.724000 audit[5468]: USER_ACCT pid=5468 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:24.750000 audit[5468]: CRED_ACQ pid=5468 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:24.751002 sshd[5468]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:21:24.774507 kernel: audit: type=1101 audit(1761956484.724:456): pid=5468 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:24.774635 kernel: audit: type=1103 audit(1761956484.750:457): pid=5468 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:24.789727 kernel: audit: type=1006 audit(1761956484.750:458): pid=5468 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Nov 1 00:21:24.750000 audit[5468]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc123ca80 a2=3 a3=1 items=0 ppid=1 pid=5468 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) 
Nov 1 00:21:24.815995 kernel: audit: type=1300 audit(1761956484.750:458): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc123ca80 a2=3 a3=1 items=0 ppid=1 pid=5468 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:21:24.750000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:21:24.828985 kernel: audit: type=1327 audit(1761956484.750:458): proctitle=737368643A20636F7265205B707269765D Nov 1 00:21:24.827999 systemd[1]: run-containerd-runc-k8s.io-db6f76a2b7ab6ed2fb0e97e0b262e36f004b1750c5d73343a16030568d48efc1-runc.OQvIrB.mount: Deactivated successfully. Nov 1 00:21:24.831925 systemd[1]: Started session-11.scope. Nov 1 00:21:24.833010 systemd-logind[1567]: New session 11 of user core. Nov 1 00:21:24.843000 audit[5468]: USER_START pid=5468 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:24.873000 audit[5487]: CRED_ACQ pid=5487 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:24.897820 kernel: audit: type=1105 audit(1761956484.843:459): pid=5468 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:24.897962 kernel: audit: type=1103 audit(1761956484.873:460): pid=5487 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:25.224209 sshd[5468]: pam_unix(sshd:session): session closed for user core Nov 1 00:21:25.224000 audit[5468]: USER_END pid=5468 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:25.253553 systemd-logind[1567]: Session 11 logged out. Waiting for processes to exit. Nov 1 00:21:25.254697 systemd[1]: sshd@8-10.200.20.42:22-10.200.16.10:57016.service: Deactivated successfully. Nov 1 00:21:25.255498 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 00:21:25.256719 systemd-logind[1567]: Removed session 11. Nov 1 00:21:25.224000 audit[5468]: CRED_DISP pid=5468 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:25.291601 kernel: audit: type=1106 audit(1761956485.224:461): pid=5468 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:25.291742 kernel: audit: type=1104 audit(1761956485.224:462): pid=5468 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:25.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@8-10.200.20.42:22-10.200.16.10:57016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:21:25.353632 kubelet[2677]: E1101 00:21:25.353561 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4mt97" podUID="8e50a05e-0803-4e20-bd2b-ccf8c9d67c23" Nov 1 00:21:26.354271 kubelet[2677]: E1101 00:21:26.354214 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86c5674785-bs7n8" podUID="57cd90f3-35a2-40bb-93fb-693c3ffcd73d" Nov 1 00:21:27.352840 kubelet[2677]: E1101 00:21:27.352795 2677 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-qwkg7" podUID="da0e9dac-d5af-4669-8132-3ec847bb81ba" Nov 1 00:21:28.353292 kubelet[2677]: E1101 00:21:28.353223 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-p85vf" podUID="ab7373cc-dd84-417d-8edc-59fbf979f4b4" Nov 1 00:21:30.298137 systemd[1]: Started sshd@9-10.200.20.42:22-10.200.16.10:33910.service. Nov 1 00:21:30.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.42:22-10.200.16.10:33910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:21:30.304416 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:21:30.304528 kernel: audit: type=1130 audit(1761956490.297:464): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.42:22-10.200.16.10:33910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:21:30.354457 kubelet[2677]: E1101 00:21:30.354405 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b964bb46-4rknd" podUID="7f2d9b8c-e77a-4876-aeb0-3b35b890f02a" Nov 1 00:21:30.744642 sshd[5504]: Accepted publickey for core from 10.200.16.10 port 33910 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:21:30.744000 audit[5504]: USER_ACCT pid=5504 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:30.771047 sshd[5504]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:21:30.770000 audit[5504]: CRED_ACQ pid=5504 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:30.794309 kernel: audit: type=1101 audit(1761956490.744:465): pid=5504 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:30.794452 kernel: audit: type=1103 audit(1761956490.770:466): pid=5504 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:30.810938 kernel: audit: type=1006 audit(1761956490.770:467): pid=5504 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Nov 1 00:21:30.799671 systemd-logind[1567]: New session 12 of user core. Nov 1 00:21:30.800190 systemd[1]: Started session-12.scope. Nov 1 00:21:30.770000 audit[5504]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffee3b2a50 a2=3 a3=1 items=0 ppid=1 pid=5504 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:21:30.838988 kernel: audit: type=1300 audit(1761956490.770:467): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffee3b2a50 a2=3 a3=1 items=0 ppid=1 pid=5504 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:21:30.770000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:21:30.848748 kernel: audit: type=1327 audit(1761956490.770:467): proctitle=737368643A20636F7265205B707269765D Nov 1 00:21:30.848877 kernel: audit: type=1105 audit(1761956490.811:468): pid=5504 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail 
acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:30.811000 audit[5504]: USER_START pid=5504 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:30.812000 audit[5509]: CRED_ACQ pid=5509 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:30.898547 kernel: audit: type=1103 audit(1761956490.812:469): pid=5509 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:31.168491 sshd[5504]: pam_unix(sshd:session): session closed for user core Nov 1 00:21:31.170000 audit[5504]: USER_END pid=5504 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:31.172104 systemd-logind[1567]: Session 12 logged out. Waiting for processes to exit. Nov 1 00:21:31.174640 systemd[1]: sshd@9-10.200.20.42:22-10.200.16.10:33910.service: Deactivated successfully. Nov 1 00:21:31.175498 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 00:21:31.176838 systemd-logind[1567]: Removed session 12. 
Nov 1 00:21:31.170000 audit[5504]: CRED_DISP pid=5504 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:31.221635 kernel: audit: type=1106 audit(1761956491.170:470): pid=5504 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:31.221765 kernel: audit: type=1104 audit(1761956491.170:471): pid=5504 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:31.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.42:22-10.200.16.10:33910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:21:32.354067 kubelet[2677]: E1101 00:21:32.354031 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pw8c5" podUID="1e69bd0a-b324-4064-9086-3d6aa0d23b51" Nov 1 00:21:36.236740 systemd[1]: Started sshd@10-10.200.20.42:22-10.200.16.10:33926.service. 
Nov 1 00:21:36.262711 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:21:36.262815 kernel: audit: type=1130 audit(1761956496.236:473): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.42:22-10.200.16.10:33926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:21:36.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.42:22-10.200.16.10:33926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:21:36.656000 audit[5519]: USER_ACCT pid=5519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:36.658763 sshd[5519]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:21:36.683636 sshd[5519]: Accepted publickey for core from 10.200.16.10 port 33926 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:21:36.686318 kernel: audit: type=1101 audit(1761956496.656:474): pid=5519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:36.658000 audit[5519]: CRED_ACQ pid=5519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:36.688475 systemd[1]: Started session-13.scope. Nov 1 00:21:36.689522 systemd-logind[1567]: New session 13 of user core. 
Nov 1 00:21:36.712306 kernel: audit: type=1103 audit(1761956496.658:475): pid=5519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:36.658000 audit[5519]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd7700a60 a2=3 a3=1 items=0 ppid=1 pid=5519 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:21:36.759307 kernel: audit: type=1006 audit(1761956496.658:476): pid=5519 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Nov 1 00:21:36.759395 kernel: audit: type=1300 audit(1761956496.658:476): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd7700a60 a2=3 a3=1 items=0 ppid=1 pid=5519 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:21:36.658000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:21:36.768637 kernel: audit: type=1327 audit(1761956496.658:476): proctitle=737368643A20636F7265205B707269765D Nov 1 00:21:36.712000 audit[5519]: USER_START pid=5519 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:36.797440 kernel: audit: type=1105 audit(1761956496.712:477): pid=5519 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" 
hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:36.713000 audit[5524]: CRED_ACQ pid=5524 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:36.822129 kernel: audit: type=1103 audit(1761956496.713:478): pid=5524 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:37.075568 sshd[5519]: pam_unix(sshd:session): session closed for user core Nov 1 00:21:37.076000 audit[5519]: USER_END pid=5519 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:37.086524 systemd-logind[1567]: Session 13 logged out. Waiting for processes to exit. Nov 1 00:21:37.090631 systemd[1]: sshd@10-10.200.20.42:22-10.200.16.10:33926.service: Deactivated successfully. Nov 1 00:21:37.091501 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 00:21:37.093277 systemd-logind[1567]: Removed session 13. 
Nov 1 00:21:37.076000 audit[5519]: CRED_DISP pid=5519 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:37.131454 kernel: audit: type=1106 audit(1761956497.076:479): pid=5519 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:37.131580 kernel: audit: type=1104 audit(1761956497.076:480): pid=5519 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:37.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.42:22-10.200.16.10:33926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:21:37.142758 systemd[1]: Started sshd@11-10.200.20.42:22-10.200.16.10:33936.service. Nov 1 00:21:37.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.42:22-10.200.16.10:33936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:21:37.354187 kubelet[2677]: E1101 00:21:37.354083 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4mt97" podUID="8e50a05e-0803-4e20-bd2b-ccf8c9d67c23" Nov 1 00:21:37.570000 audit[5534]: USER_ACCT pid=5534 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:37.571431 sshd[5534]: Accepted publickey for core from 10.200.16.10 port 33936 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:21:37.573000 audit[5534]: CRED_ACQ pid=5534 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:37.573000 audit[5534]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd20a1800 a2=3 a3=1 items=0 ppid=1 pid=5534 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:21:37.573000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:21:37.573963 sshd[5534]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:21:37.578572 systemd-logind[1567]: New session 14 of user core. Nov 1 00:21:37.578971 systemd[1]: Started session-14.scope. Nov 1 00:21:37.583000 audit[5534]: USER_START pid=5534 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:37.584000 audit[5537]: CRED_ACQ pid=5537 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:37.994309 sshd[5534]: pam_unix(sshd:session): session closed for user core Nov 1 00:21:37.995000 audit[5534]: USER_END pid=5534 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:37.995000 audit[5534]: CRED_DISP pid=5534 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:37.997532 systemd[1]: sshd@11-10.200.20.42:22-10.200.16.10:33936.service: Deactivated successfully. Nov 1 00:21:37.998333 systemd[1]: session-14.scope: Deactivated successfully. 
Nov 1 00:21:37.998626 systemd-logind[1567]: Session 14 logged out. Waiting for processes to exit. Nov 1 00:21:37.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.42:22-10.200.16.10:33936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:21:37.999684 systemd-logind[1567]: Removed session 14. Nov 1 00:21:38.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.42:22-10.200.16.10:33946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:21:38.061170 systemd[1]: Started sshd@12-10.200.20.42:22-10.200.16.10:33946.service. Nov 1 00:21:38.353628 kubelet[2677]: E1101 00:21:38.353538 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-qwkg7" podUID="da0e9dac-d5af-4669-8132-3ec847bb81ba" Nov 1 00:21:38.480000 audit[5545]: USER_ACCT pid=5545 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:38.481221 sshd[5545]: Accepted publickey for core from 10.200.16.10 port 33946 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:21:38.482000 audit[5545]: CRED_ACQ pid=5545 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:38.482000 audit[5545]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd142cbf0 a2=3 a3=1 items=0 ppid=1 pid=5545 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:21:38.482000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:21:38.482939 sshd[5545]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:21:38.487401 systemd[1]: Started session-15.scope. Nov 1 00:21:38.487584 systemd-logind[1567]: New session 15 of user core. Nov 1 00:21:38.491000 audit[5545]: USER_START pid=5545 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:38.493000 audit[5548]: CRED_ACQ pid=5548 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:38.893737 sshd[5545]: pam_unix(sshd:session): session closed for user core Nov 1 00:21:38.894000 audit[5545]: USER_END pid=5545 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:38.894000 audit[5545]: CRED_DISP pid=5545 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:38.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.42:22-10.200.16.10:33946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:21:38.896838 systemd[1]: sshd@12-10.200.20.42:22-10.200.16.10:33946.service: Deactivated successfully. Nov 1 00:21:38.897649 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 00:21:38.897962 systemd-logind[1567]: Session 15 logged out. Waiting for processes to exit. Nov 1 00:21:38.900669 systemd-logind[1567]: Removed session 15. Nov 1 00:21:41.353325 kubelet[2677]: E1101 00:21:41.353275 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86c5674785-bs7n8" podUID="57cd90f3-35a2-40bb-93fb-693c3ffcd73d" Nov 1 00:21:42.353165 kubelet[2677]: E1101 00:21:42.353121 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-6c8dcbbd64-p85vf" podUID="ab7373cc-dd84-417d-8edc-59fbf979f4b4" Nov 1 00:21:42.353982 kubelet[2677]: E1101 00:21:42.353942 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b964bb46-4rknd" podUID="7f2d9b8c-e77a-4876-aeb0-3b35b890f02a" Nov 1 00:21:43.960971 systemd[1]: Started sshd@13-10.200.20.42:22-10.200.16.10:41890.service. Nov 1 00:21:43.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.42:22-10.200.16.10:41890 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:21:43.967058 kernel: kauditd_printk_skb: 23 callbacks suppressed Nov 1 00:21:43.967177 kernel: audit: type=1130 audit(1761956503.961:500): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.42:22-10.200.16.10:41890 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:21:44.355644 kubelet[2677]: E1101 00:21:44.355540 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pw8c5" podUID="1e69bd0a-b324-4064-9086-3d6aa0d23b51" Nov 1 00:21:44.385594 sshd[5558]: Accepted publickey for core from 10.200.16.10 port 41890 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:21:44.385000 audit[5558]: USER_ACCT pid=5558 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:44.413533 sshd[5558]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:21:44.412000 audit[5558]: CRED_ACQ pid=5558 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:44.440709 kernel: audit: type=1101 audit(1761956504.385:501): pid=5558 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:44.440804 kernel: audit: type=1103 audit(1761956504.412:502): pid=5558 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:44.456454 kernel: audit: type=1006 audit(1761956504.412:503): pid=5558 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Nov 1 00:21:44.445527 systemd-logind[1567]: New session 16 of user core. Nov 1 00:21:44.446128 systemd[1]: Started session-16.scope. Nov 1 00:21:44.412000 audit[5558]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffa8a3c60 a2=3 a3=1 items=0 ppid=1 pid=5558 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:21:44.485950 kernel: audit: type=1300 audit(1761956504.412:503): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffa8a3c60 a2=3 a3=1 items=0 ppid=1 pid=5558 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:21:44.412000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:21:44.496600 kernel: audit: type=1327 audit(1761956504.412:503): proctitle=737368643A20636F7265205B707269765D Nov 1 00:21:44.458000 audit[5558]: USER_START pid=5558 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:44.528657 kernel: audit: type=1105 audit(1761956504.458:504): pid=5558 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh 
res=success' Nov 1 00:21:44.459000 audit[5561]: CRED_ACQ pid=5561 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:44.586854 kernel: audit: type=1103 audit(1761956504.459:505): pid=5561 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:44.822568 sshd[5558]: pam_unix(sshd:session): session closed for user core Nov 1 00:21:44.823000 audit[5558]: USER_END pid=5558 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:44.852470 systemd[1]: sshd@13-10.200.20.42:22-10.200.16.10:41890.service: Deactivated successfully. Nov 1 00:21:44.853716 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 00:21:44.854010 systemd-logind[1567]: Session 16 logged out. Waiting for processes to exit. Nov 1 00:21:44.854750 systemd-logind[1567]: Removed session 16. 
Nov 1 00:21:44.824000 audit[5558]: CRED_DISP pid=5558 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:44.880893 kernel: audit: type=1106 audit(1761956504.823:506): pid=5558 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:44.881027 kernel: audit: type=1104 audit(1761956504.824:507): pid=5558 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:44.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.42:22-10.200.16.10:41890 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:21:48.354257 kubelet[2677]: E1101 00:21:48.354209 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4mt97" podUID="8e50a05e-0803-4e20-bd2b-ccf8c9d67c23" Nov 1 00:21:49.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.42:22-10.200.16.10:40878 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:21:49.890716 systemd[1]: Started sshd@14-10.200.20.42:22-10.200.16.10:40878.service. Nov 1 00:21:49.896091 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:21:49.896194 kernel: audit: type=1130 audit(1761956509.890:509): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.42:22-10.200.16.10:40878 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:21:50.339733 sshd[5575]: Accepted publickey for core from 10.200.16.10 port 40878 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:21:50.339000 audit[5575]: USER_ACCT pid=5575 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:50.355628 kubelet[2677]: E1101 00:21:50.355577 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-qwkg7" podUID="da0e9dac-d5af-4669-8132-3ec847bb81ba" Nov 1 00:21:50.367917 sshd[5575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:21:50.367000 audit[5575]: CRED_ACQ pid=5575 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:50.391531 kernel: audit: type=1101 audit(1761956510.339:510): pid=5575 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:50.391623 kernel: audit: type=1103 audit(1761956510.367:511): pid=5575 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:50.408891 kernel: audit: type=1006 audit(1761956510.367:512): pid=5575 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Nov 1 00:21:50.411346 systemd[1]: Started session-17.scope. Nov 1 00:21:50.411560 systemd-logind[1567]: New session 17 of user core. Nov 1 00:21:50.367000 audit[5575]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd7fcf8d0 a2=3 a3=1 items=0 ppid=1 pid=5575 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:21:50.439017 kernel: audit: type=1300 audit(1761956510.367:512): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd7fcf8d0 a2=3 a3=1 items=0 ppid=1 pid=5575 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:21:50.441120 kernel: audit: type=1327 audit(1761956510.367:512): proctitle=737368643A20636F7265205B707269765D Nov 1 00:21:50.367000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:21:50.450000 audit[5575]: USER_START pid=5575 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:50.451000 audit[5582]: CRED_ACQ pid=5582 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:50.499115 kernel: audit: type=1105 audit(1761956510.450:513): pid=5575 
uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:50.499370 kernel: audit: type=1103 audit(1761956510.451:514): pid=5582 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:50.763232 sshd[5575]: pam_unix(sshd:session): session closed for user core Nov 1 00:21:50.764000 audit[5575]: USER_END pid=5575 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:50.768581 systemd[1]: sshd@14-10.200.20.42:22-10.200.16.10:40878.service: Deactivated successfully. Nov 1 00:21:50.769408 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 00:21:50.770617 systemd-logind[1567]: Session 17 logged out. Waiting for processes to exit. Nov 1 00:21:50.771583 systemd-logind[1567]: Removed session 17. 
Nov 1 00:21:50.764000 audit[5575]: CRED_DISP pid=5575 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:50.823068 kernel: audit: type=1106 audit(1761956510.764:515): pid=5575 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:50.823171 kernel: audit: type=1104 audit(1761956510.764:516): pid=5575 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:50.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.42:22-10.200.16.10:40878 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:21:54.784050 systemd[1]: run-containerd-runc-k8s.io-db6f76a2b7ab6ed2fb0e97e0b262e36f004b1750c5d73343a16030568d48efc1-runc.8CBaed.mount: Deactivated successfully. 
Nov 1 00:21:55.353258 kubelet[2677]: E1101 00:21:55.353224 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-p85vf" podUID="ab7373cc-dd84-417d-8edc-59fbf979f4b4" Nov 1 00:21:55.354055 kubelet[2677]: E1101 00:21:55.354031 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pw8c5" podUID="1e69bd0a-b324-4064-9086-3d6aa0d23b51" Nov 1 00:21:55.841323 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:21:55.841472 kernel: audit: type=1130 audit(1761956515.830:518): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.42:22-10.200.16.10:40884 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:21:55.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.42:22-10.200.16.10:40884 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:21:55.830954 systemd[1]: Started sshd@15-10.200.20.42:22-10.200.16.10:40884.service. 
Nov 1 00:21:56.275435 sshd[5613]: Accepted publickey for core from 10.200.16.10 port 40884 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:21:56.274000 audit[5613]: USER_ACCT pid=5613 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:56.300000 audit[5613]: CRED_ACQ pid=5613 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:56.302274 sshd[5613]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:21:56.326173 kernel: audit: type=1101 audit(1761956516.274:519): pid=5613 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:56.326317 kernel: audit: type=1103 audit(1761956516.300:520): pid=5613 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:56.341719 kernel: audit: type=1006 audit(1761956516.300:521): pid=5613 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Nov 1 00:21:56.343334 kernel: audit: type=1300 audit(1761956516.300:521): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff6993960 a2=3 a3=1 items=0 ppid=1 pid=5613 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:21:56.300000 audit[5613]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff6993960 a2=3 a3=1 items=0 ppid=1 pid=5613 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:21:56.345958 systemd[1]: Started session-18.scope. Nov 1 00:21:56.346873 systemd-logind[1567]: New session 18 of user core. Nov 1 00:21:56.372142 kubelet[2677]: E1101 00:21:56.372098 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86c5674785-bs7n8" podUID="57cd90f3-35a2-40bb-93fb-693c3ffcd73d" Nov 1 00:21:56.374784 kubelet[2677]: E1101 00:21:56.373704 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b964bb46-4rknd" podUID="7f2d9b8c-e77a-4876-aeb0-3b35b890f02a" Nov 1 00:21:56.300000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:21:56.383968 kernel: audit: type=1327 audit(1761956516.300:521): proctitle=737368643A20636F7265205B707269765D Nov 1 00:21:56.347000 audit[5613]: USER_START pid=5613 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:56.415322 kernel: audit: type=1105 audit(1761956516.347:522): pid=5613 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:56.369000 audit[5615]: CRED_ACQ pid=5615 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:56.437201 kernel: audit: type=1103 audit(1761956516.369:523): pid=5615 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:56.696923 sshd[5613]: pam_unix(sshd:session): session closed for user core Nov 1 00:21:56.697000 audit[5613]: USER_END pid=5613 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:56.700592 systemd-logind[1567]: Session 18 logged out. Waiting for processes to exit. Nov 1 00:21:56.701793 systemd[1]: sshd@15-10.200.20.42:22-10.200.16.10:40884.service: Deactivated successfully. Nov 1 00:21:56.702591 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 00:21:56.703842 systemd-logind[1567]: Removed session 18. Nov 1 00:21:56.697000 audit[5613]: CRED_DISP pid=5613 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:56.747961 kernel: audit: type=1106 audit(1761956516.697:524): pid=5613 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:56.748087 kernel: audit: type=1104 audit(1761956516.697:525): pid=5613 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:21:56.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.42:22-10.200.16.10:40884 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:22:01.353125 kubelet[2677]: E1101 00:22:01.353072 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4mt97" podUID="8e50a05e-0803-4e20-bd2b-ccf8c9d67c23" Nov 1 00:22:01.765137 systemd[1]: Started sshd@16-10.200.20.42:22-10.200.16.10:60352.service. Nov 1 00:22:01.793513 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:22:01.793632 kernel: audit: type=1130 audit(1761956521.765:527): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.42:22-10.200.16.10:60352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:01.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.42:22-10.200.16.10:60352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:22:02.187000 audit[5633]: USER_ACCT pid=5633 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:02.188984 sshd[5633]: Accepted publickey for core from 10.200.16.10 port 60352 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:22:02.190856 sshd[5633]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:22:02.189000 audit[5633]: CRED_ACQ pid=5633 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:02.235381 kernel: audit: type=1101 audit(1761956522.187:528): pid=5633 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:02.235521 kernel: audit: type=1103 audit(1761956522.189:529): pid=5633 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:02.250134 kernel: audit: type=1006 audit(1761956522.189:530): pid=5633 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Nov 1 00:22:02.189000 audit[5633]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffebfc9df0 a2=3 a3=1 items=0 ppid=1 pid=5633 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) 
Nov 1 00:22:02.275292 kernel: audit: type=1300 audit(1761956522.189:530): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffebfc9df0 a2=3 a3=1 items=0 ppid=1 pid=5633 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:02.278406 systemd[1]: Started session-19.scope. Nov 1 00:22:02.279465 systemd-logind[1567]: New session 19 of user core. Nov 1 00:22:02.189000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:22:02.291846 kernel: audit: type=1327 audit(1761956522.189:530): proctitle=737368643A20636F7265205B707269765D Nov 1 00:22:02.291000 audit[5633]: USER_START pid=5633 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:02.324869 kernel: audit: type=1105 audit(1761956522.291:531): pid=5633 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:02.293000 audit[5636]: CRED_ACQ pid=5636 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:02.348255 kernel: audit: type=1103 audit(1761956522.293:532): pid=5636 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:02.687777 sshd[5633]: pam_unix(sshd:session): 
session closed for user core Nov 1 00:22:02.687000 audit[5633]: USER_END pid=5633 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:02.715587 systemd[1]: sshd@16-10.200.20.42:22-10.200.16.10:60352.service: Deactivated successfully. Nov 1 00:22:02.717840 systemd-logind[1567]: Session 19 logged out. Waiting for processes to exit. Nov 1 00:22:02.717842 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 00:22:02.687000 audit[5633]: CRED_DISP pid=5633 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:02.740067 kernel: audit: type=1106 audit(1761956522.687:533): pid=5633 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:02.740173 kernel: audit: type=1104 audit(1761956522.687:534): pid=5633 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:02.740326 systemd-logind[1567]: Removed session 19. Nov 1 00:22:02.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.42:22-10.200.16.10:60352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:22:04.353271 kubelet[2677]: E1101 00:22:04.353236 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-qwkg7" podUID="da0e9dac-d5af-4669-8132-3ec847bb81ba" Nov 1 00:22:06.353327 kubelet[2677]: E1101 00:22:06.353275 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pw8c5" podUID="1e69bd0a-b324-4064-9086-3d6aa0d23b51" Nov 1 00:22:07.755121 systemd[1]: Started sshd@17-10.200.20.42:22-10.200.16.10:60362.service. Nov 1 00:22:07.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.42:22-10.200.16.10:60362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:07.760671 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:22:07.760765 kernel: audit: type=1130 audit(1761956527.754:536): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.42:22-10.200.16.10:60362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:22:08.173000 audit[5650]: USER_ACCT pid=5650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:08.174784 sshd[5650]: Accepted publickey for core from 10.200.16.10 port 60362 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:22:08.198000 audit[5650]: CRED_ACQ pid=5650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:08.199820 sshd[5650]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:22:08.221760 kernel: audit: type=1101 audit(1761956528.173:537): pid=5650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:08.221878 kernel: audit: type=1103 audit(1761956528.198:538): pid=5650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:08.236780 kernel: audit: type=1006 audit(1761956528.198:539): pid=5650 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Nov 1 00:22:08.198000 audit[5650]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc0462ce0 a2=3 a3=1 items=0 ppid=1 pid=5650 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) 
Nov 1 00:22:08.261389 kernel: audit: type=1300 audit(1761956528.198:539): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc0462ce0 a2=3 a3=1 items=0 ppid=1 pid=5650 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:08.198000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:22:08.270946 kernel: audit: type=1327 audit(1761956528.198:539): proctitle=737368643A20636F7265205B707269765D Nov 1 00:22:08.271701 systemd-logind[1567]: New session 20 of user core. Nov 1 00:22:08.272065 systemd[1]: Started session-20.scope. Nov 1 00:22:08.280000 audit[5650]: USER_START pid=5650 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:08.306000 audit[5653]: CRED_ACQ pid=5653 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:08.332471 kernel: audit: type=1105 audit(1761956528.280:540): pid=5650 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:08.332556 kernel: audit: type=1103 audit(1761956528.306:541): pid=5653 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:08.353606 kubelet[2677]: E1101 00:22:08.353552 
2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86c5674785-bs7n8" podUID="57cd90f3-35a2-40bb-93fb-693c3ffcd73d" Nov 1 00:22:08.354447 env[1586]: time="2025-11-01T00:22:08.354409667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:22:08.601457 env[1586]: time="2025-11-01T00:22:08.601405712Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:08.603988 env[1586]: time="2025-11-01T00:22:08.603934102Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:22:08.604201 kubelet[2677]: E1101 00:22:08.604145 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:22:08.604274 kubelet[2677]: E1101 00:22:08.604211 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:22:08.604387 kubelet[2677]: E1101 00:22:08.604341 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kh9cr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c8dcbbd64-p85vf_calico-apiserver(ab7373cc-dd84-417d-8edc-59fbf979f4b4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:08.605607 kubelet[2677]: E1101 00:22:08.605572 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-p85vf" podUID="ab7373cc-dd84-417d-8edc-59fbf979f4b4" Nov 1 00:22:08.643538 sshd[5650]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:08.643000 audit[5650]: USER_END pid=5650 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:08.670655 systemd[1]: sshd@17-10.200.20.42:22-10.200.16.10:60362.service: Deactivated successfully. Nov 1 00:22:08.671732 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 00:22:08.671762 systemd-logind[1567]: Session 20 logged out. Waiting for processes to exit. Nov 1 00:22:08.672728 systemd-logind[1567]: Removed session 20. Nov 1 00:22:08.643000 audit[5650]: CRED_DISP pid=5650 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:08.696665 kernel: audit: type=1106 audit(1761956528.643:542): pid=5650 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:08.696760 kernel: audit: type=1104 audit(1761956528.643:543): pid=5650 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:08.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.42:22-10.200.16.10:60362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:22:08.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.42:22-10.200.16.10:60374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:08.709539 systemd[1]: Started sshd@18-10.200.20.42:22-10.200.16.10:60374.service. Nov 1 00:22:09.129000 audit[5662]: USER_ACCT pid=5662 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:09.130551 sshd[5662]: Accepted publickey for core from 10.200.16.10 port 60374 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:22:09.130000 audit[5662]: CRED_ACQ pid=5662 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:09.130000 audit[5662]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd089bc10 a2=3 a3=1 items=0 ppid=1 pid=5662 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:09.130000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:22:09.132161 sshd[5662]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:22:09.136189 systemd-logind[1567]: New session 21 of user core. Nov 1 00:22:09.136643 systemd[1]: Started session-21.scope. 
Nov 1 00:22:09.141000 audit[5662]: USER_START pid=5662 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:09.142000 audit[5665]: CRED_ACQ pid=5665 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:09.353279 env[1586]: time="2025-11-01T00:22:09.353233815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:22:09.594860 env[1586]: time="2025-11-01T00:22:09.594793409Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:09.597597 env[1586]: time="2025-11-01T00:22:09.597550158Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:22:09.597981 kubelet[2677]: E1101 00:22:09.597774 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:22:09.597981 kubelet[2677]: E1101 00:22:09.597824 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:22:09.597981 kubelet[2677]: E1101 00:22:09.597931 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:3429d9572f3c4ccbba53eb23e40c8366,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rqvq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b964bb46-4rknd_calico-system(7f2d9b8c-e77a-4876-aeb0-3b35b890f02a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:09.600011 env[1586]: time="2025-11-01T00:22:09.599982669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:22:09.655255 sshd[5662]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:09.654000 audit[5662]: USER_END pid=5662 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:09.655000 audit[5662]: CRED_DISP pid=5662 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:09.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.42:22-10.200.16.10:60374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:09.657939 systemd-logind[1567]: Session 21 logged out. Waiting for processes to exit. Nov 1 00:22:09.658115 systemd[1]: sshd@18-10.200.20.42:22-10.200.16.10:60374.service: Deactivated successfully. Nov 1 00:22:09.659118 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 00:22:09.659612 systemd-logind[1567]: Removed session 21. Nov 1 00:22:09.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.42:22-10.200.16.10:60388 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:09.698387 systemd[1]: Started sshd@19-10.200.20.42:22-10.200.16.10:60388.service. 
Nov 1 00:22:09.832264 env[1586]: time="2025-11-01T00:22:09.832217060Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:09.835672 env[1586]: time="2025-11-01T00:22:09.835618007Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:22:09.836010 kubelet[2677]: E1101 00:22:09.835961 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:22:09.836118 kubelet[2677]: E1101 00:22:09.836019 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:22:09.836159 kubelet[2677]: E1101 00:22:09.836129 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rqvq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6b964bb46-4rknd_calico-system(7f2d9b8c-e77a-4876-aeb0-3b35b890f02a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:09.837522 kubelet[2677]: E1101 00:22:09.837483 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b964bb46-4rknd" podUID="7f2d9b8c-e77a-4876-aeb0-3b35b890f02a" Nov 1 00:22:10.122000 audit[5673]: USER_ACCT pid=5673 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:10.124228 sshd[5673]: Accepted publickey for core from 10.200.16.10 port 60388 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:22:10.124000 audit[5673]: CRED_ACQ pid=5673 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:10.124000 audit[5673]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc32e9fa0 a2=3 a3=1 items=0 ppid=1 pid=5673 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:10.124000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:22:10.125711 sshd[5673]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:22:10.130843 systemd-logind[1567]: New session 22 of user core. Nov 1 00:22:10.131273 systemd[1]: Started session-22.scope. Nov 1 00:22:10.137000 audit[5673]: USER_START pid=5673 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:10.139000 audit[5676]: CRED_ACQ pid=5676 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:11.036671 sshd[5673]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:11.036000 audit[5673]: USER_END pid=5673 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:11.036000 audit[5673]: CRED_DISP pid=5673 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:11.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.42:22-10.200.16.10:60388 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:22:11.038997 systemd-logind[1567]: Session 22 logged out. Waiting for processes to exit. Nov 1 00:22:11.039210 systemd[1]: sshd@19-10.200.20.42:22-10.200.16.10:60388.service: Deactivated successfully. Nov 1 00:22:11.040109 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 00:22:11.040560 systemd-logind[1567]: Removed session 22. Nov 1 00:22:11.042000 audit[5687]: NETFILTER_CFG table=filter:133 family=2 entries=26 op=nft_register_rule pid=5687 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:22:11.042000 audit[5687]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffc2a6d940 a2=0 a3=1 items=0 ppid=2782 pid=5687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:11.042000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:22:11.051000 audit[5687]: NETFILTER_CFG table=nat:134 family=2 entries=20 op=nft_register_rule pid=5687 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:22:11.051000 audit[5687]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc2a6d940 a2=0 a3=1 items=0 ppid=2782 pid=5687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:11.051000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:22:11.070000 audit[5691]: NETFILTER_CFG table=filter:135 family=2 entries=38 op=nft_register_rule pid=5691 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:22:11.070000 audit[5691]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 
a1=ffffd56ed130 a2=0 a3=1 items=0 ppid=2782 pid=5691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:11.070000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:22:11.075000 audit[5691]: NETFILTER_CFG table=nat:136 family=2 entries=20 op=nft_register_rule pid=5691 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:22:11.075000 audit[5691]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd56ed130 a2=0 a3=1 items=0 ppid=2782 pid=5691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:11.075000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:22:11.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.42:22-10.200.16.10:47346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:11.102882 systemd[1]: Started sshd@20-10.200.20.42:22-10.200.16.10:47346.service. 
Nov 1 00:22:11.515000 audit[5692]: USER_ACCT pid=5692 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:11.517461 sshd[5692]: Accepted publickey for core from 10.200.16.10 port 47346 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:22:11.517000 audit[5692]: CRED_ACQ pid=5692 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:11.517000 audit[5692]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffc7968d0 a2=3 a3=1 items=0 ppid=1 pid=5692 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:11.517000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:22:11.519083 sshd[5692]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:22:11.523571 systemd[1]: Started session-23.scope. Nov 1 00:22:11.523913 systemd-logind[1567]: New session 23 of user core. 
Nov 1 00:22:11.527000 audit[5692]: USER_START pid=5692 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:11.528000 audit[5695]: CRED_ACQ pid=5695 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:12.131776 sshd[5692]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:12.131000 audit[5692]: USER_END pid=5692 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:12.131000 audit[5692]: CRED_DISP pid=5692 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:12.134171 systemd[1]: sshd@20-10.200.20.42:22-10.200.16.10:47346.service: Deactivated successfully. Nov 1 00:22:12.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.42:22-10.200.16.10:47346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:12.135417 systemd[1]: session-23.scope: Deactivated successfully. Nov 1 00:22:12.135730 systemd-logind[1567]: Session 23 logged out. Waiting for processes to exit. Nov 1 00:22:12.136540 systemd-logind[1567]: Removed session 23. 
Nov 1 00:22:12.197821 systemd[1]: Started sshd@21-10.200.20.42:22-10.200.16.10:47352.service. Nov 1 00:22:12.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.42:22-10.200.16.10:47352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:12.612000 audit[5704]: USER_ACCT pid=5704 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:12.613850 sshd[5704]: Accepted publickey for core from 10.200.16.10 port 47352 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:22:12.613000 audit[5704]: CRED_ACQ pid=5704 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:12.613000 audit[5704]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe11f8730 a2=3 a3=1 items=0 ppid=1 pid=5704 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:12.613000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:22:12.615138 sshd[5704]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:22:12.619897 systemd[1]: Started session-24.scope. Nov 1 00:22:12.620084 systemd-logind[1567]: New session 24 of user core. 
Nov 1 00:22:12.624000 audit[5704]: USER_START pid=5704 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:12.625000 audit[5707]: CRED_ACQ pid=5707 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:12.980477 sshd[5704]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:12.990328 kernel: kauditd_printk_skb: 54 callbacks suppressed Nov 1 00:22:12.990502 kernel: audit: type=1106 audit(1761956532.981:582): pid=5704 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:12.981000 audit[5704]: USER_END pid=5704 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:12.983858 systemd[1]: sshd@21-10.200.20.42:22-10.200.16.10:47352.service: Deactivated successfully. Nov 1 00:22:12.984701 systemd[1]: session-24.scope: Deactivated successfully. Nov 1 00:22:12.991626 systemd-logind[1567]: Session 24 logged out. Waiting for processes to exit. Nov 1 00:22:12.992599 systemd-logind[1567]: Removed session 24. 
Nov 1 00:22:12.981000 audit[5704]: CRED_DISP pid=5704 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:13.035516 kernel: audit: type=1104 audit(1761956532.981:583): pid=5704 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:12.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.42:22-10.200.16.10:47352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:13.057307 kernel: audit: type=1131 audit(1761956532.982:584): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.42:22-10.200.16.10:47352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:22:13.353457 env[1586]: time="2025-11-01T00:22:13.353099843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:22:13.628278 env[1586]: time="2025-11-01T00:22:13.628039296Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:13.630849 env[1586]: time="2025-11-01T00:22:13.630783966Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:22:13.631069 kubelet[2677]: E1101 00:22:13.631026 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:22:13.631369 kubelet[2677]: E1101 00:22:13.631088 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:22:13.631493 kubelet[2677]: E1101 00:22:13.631208 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gnml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4mt97_calico-system(8e50a05e-0803-4e20-bd2b-ccf8c9d67c23): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:13.633734 env[1586]: time="2025-11-01T00:22:13.633551915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:22:13.859735 env[1586]: time="2025-11-01T00:22:13.859681317Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:13.863839 env[1586]: time="2025-11-01T00:22:13.863769141Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:22:13.864062 kubelet[2677]: E1101 00:22:13.864023 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:22:13.864135 kubelet[2677]: E1101 00:22:13.864090 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:22:13.864508 kubelet[2677]: E1101 00:22:13.864220 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 
--csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gnml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4mt97_calico-system(8e50a05e-0803-4e20-bd2b-ccf8c9d67c23): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:13.865785 kubelet[2677]: E1101 00:22:13.865751 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4mt97" podUID="8e50a05e-0803-4e20-bd2b-ccf8c9d67c23" Nov 1 00:22:15.353424 kubelet[2677]: E1101 00:22:15.353379 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-qwkg7" podUID="da0e9dac-d5af-4669-8132-3ec847bb81ba" Nov 1 00:22:16.837000 audit[5718]: NETFILTER_CFG table=filter:137 family=2 entries=26 op=nft_register_rule pid=5718 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:22:16.837000 audit[5718]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffffe0b0a50 a2=0 a3=1 items=0 ppid=2782 pid=5718 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:16.880276 kernel: audit: type=1325 audit(1761956536.837:585): table=filter:137 family=2 entries=26 op=nft_register_rule pid=5718 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:22:16.880445 kernel: audit: type=1300 audit(1761956536.837:585): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffffe0b0a50 a2=0 a3=1 items=0 ppid=2782 pid=5718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:16.837000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:22:16.893835 kernel: audit: type=1327 audit(1761956536.837:585): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:22:16.895000 audit[5718]: NETFILTER_CFG table=nat:138 family=2 entries=104 op=nft_register_chain pid=5718 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:22:16.895000 audit[5718]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=fffffe0b0a50 a2=0 a3=1 items=0 ppid=2782 pid=5718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:16.939659 kernel: audit: type=1325 audit(1761956536.895:586): table=nat:138 family=2 entries=104 op=nft_register_chain pid=5718 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:22:16.939752 kernel: audit: type=1300 audit(1761956536.895:586): arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=fffffe0b0a50 a2=0 a3=1 
items=0 ppid=2782 pid=5718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:16.895000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:22:16.954359 kernel: audit: type=1327 audit(1761956536.895:586): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:22:18.049658 systemd[1]: Started sshd@22-10.200.20.42:22-10.200.16.10:47360.service. Nov 1 00:22:18.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.42:22-10.200.16.10:47360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:18.074310 kernel: audit: type=1130 audit(1761956538.048:587): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.42:22-10.200.16.10:47360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:22:18.482686 sshd[5719]: Accepted publickey for core from 10.200.16.10 port 47360 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:22:18.481000 audit[5719]: USER_ACCT pid=5719 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:18.507922 sshd[5719]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:22:18.506000 audit[5719]: CRED_ACQ pid=5719 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:18.534969 kernel: audit: type=1101 audit(1761956538.481:588): pid=5719 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:18.535122 kernel: audit: type=1103 audit(1761956538.506:589): pid=5719 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:18.550503 kernel: audit: type=1006 audit(1761956538.506:590): pid=5719 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Nov 1 00:22:18.506000 audit[5719]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe28f7cc0 a2=3 a3=1 items=0 ppid=1 pid=5719 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) 
Nov 1 00:22:18.576766 kernel: audit: type=1300 audit(1761956538.506:590): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe28f7cc0 a2=3 a3=1 items=0 ppid=1 pid=5719 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:18.579840 systemd[1]: Started session-25.scope. Nov 1 00:22:18.580817 systemd-logind[1567]: New session 25 of user core. Nov 1 00:22:18.506000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:22:18.595358 kernel: audit: type=1327 audit(1761956538.506:590): proctitle=737368643A20636F7265205B707269765D Nov 1 00:22:18.584000 audit[5719]: USER_START pid=5719 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:18.627576 kernel: audit: type=1105 audit(1761956538.584:591): pid=5719 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:18.585000 audit[5722]: CRED_ACQ pid=5722 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:18.649603 kernel: audit: type=1103 audit(1761956538.585:592): pid=5722 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:18.910524 sshd[5719]: pam_unix(sshd:session): 
session closed for user core Nov 1 00:22:18.910000 audit[5719]: USER_END pid=5719 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:18.937802 systemd[1]: sshd@22-10.200.20.42:22-10.200.16.10:47360.service: Deactivated successfully. Nov 1 00:22:18.938667 systemd[1]: session-25.scope: Deactivated successfully. Nov 1 00:22:18.939774 systemd-logind[1567]: Session 25 logged out. Waiting for processes to exit. Nov 1 00:22:18.940625 systemd-logind[1567]: Removed session 25. Nov 1 00:22:18.910000 audit[5719]: CRED_DISP pid=5719 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:18.968892 kernel: audit: type=1106 audit(1761956538.910:593): pid=5719 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:18.968984 kernel: audit: type=1104 audit(1761956538.910:594): pid=5719 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:18.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.42:22-10.200.16.10:47360 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:22:20.355461 env[1586]: time="2025-11-01T00:22:20.355413344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:22:20.356319 kubelet[2677]: E1101 00:22:20.356254 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-p85vf" podUID="ab7373cc-dd84-417d-8edc-59fbf979f4b4" Nov 1 00:22:20.587701 env[1586]: time="2025-11-01T00:22:20.587623926Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:20.597074 env[1586]: time="2025-11-01T00:22:20.597003012Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:22:20.597344 kubelet[2677]: E1101 00:22:20.597306 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:22:20.597419 kubelet[2677]: E1101 00:22:20.597356 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:22:20.597566 kubelet[2677]: E1101 00:22:20.597501 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x7dtq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-pw8c5_calico-system(1e69bd0a-b324-4064-9086-3d6aa0d23b51): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:20.599056 kubelet[2677]: E1101 00:22:20.599026 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pw8c5" podUID="1e69bd0a-b324-4064-9086-3d6aa0d23b51" Nov 1 
00:22:22.353796 env[1586]: time="2025-11-01T00:22:22.353743804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:22:22.592862 env[1586]: time="2025-11-01T00:22:22.592805373Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:22.600734 env[1586]: time="2025-11-01T00:22:22.600680584Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:22:22.601102 kubelet[2677]: E1101 00:22:22.601047 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:22:22.601429 kubelet[2677]: E1101 00:22:22.601106 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:22:22.601683 kubelet[2677]: E1101 00:22:22.601254 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gzr2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-86c5674785-bs7n8_calico-system(57cd90f3-35a2-40bb-93fb-693c3ffcd73d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:22.602952 kubelet[2677]: E1101 00:22:22.602921 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86c5674785-bs7n8" podUID="57cd90f3-35a2-40bb-93fb-693c3ffcd73d" Nov 1 00:22:23.354331 kubelet[2677]: E1101 00:22:23.354255 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b964bb46-4rknd" podUID="7f2d9b8c-e77a-4876-aeb0-3b35b890f02a" Nov 1 00:22:24.003251 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:22:24.003390 kernel: audit: type=1130 audit(1761956543.975:596): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.42:22-10.200.16.10:50300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:23.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.42:22-10.200.16.10:50300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:23.977032 systemd[1]: Started sshd@23-10.200.20.42:22-10.200.16.10:50300.service. 
Nov 1 00:22:24.394000 audit[5732]: USER_ACCT pid=5732 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:24.397672 sshd[5732]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:22:24.400882 sshd[5732]: Accepted publickey for core from 10.200.16.10 port 50300 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:22:24.396000 audit[5732]: CRED_ACQ pid=5732 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:24.442699 kernel: audit: type=1101 audit(1761956544.394:597): pid=5732 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:24.442853 kernel: audit: type=1103 audit(1761956544.396:598): pid=5732 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:24.446414 systemd-logind[1567]: New session 26 of user core. Nov 1 00:22:24.447303 systemd[1]: Started session-26.scope. 
Nov 1 00:22:24.458384 kernel: audit: type=1006 audit(1761956544.396:599): pid=5732 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Nov 1 00:22:24.396000 audit[5732]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe4bc98a0 a2=3 a3=1 items=0 ppid=1 pid=5732 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:24.488698 kernel: audit: type=1300 audit(1761956544.396:599): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe4bc98a0 a2=3 a3=1 items=0 ppid=1 pid=5732 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:24.396000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:22:24.497425 kernel: audit: type=1327 audit(1761956544.396:599): proctitle=737368643A20636F7265205B707269765D Nov 1 00:22:24.450000 audit[5732]: USER_START pid=5732 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:24.523785 kernel: audit: type=1105 audit(1761956544.450:600): pid=5732 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:24.523985 kernel: audit: type=1103 audit(1761956544.458:601): pid=5735 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 
addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:24.458000 audit[5735]: CRED_ACQ pid=5735 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:24.799633 sshd[5732]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:24.799000 audit[5732]: USER_END pid=5732 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:24.802675 systemd[1]: sshd@23-10.200.20.42:22-10.200.16.10:50300.service: Deactivated successfully. Nov 1 00:22:24.803428 systemd[1]: session-26.scope: Deactivated successfully. Nov 1 00:22:24.803951 systemd-logind[1567]: Session 26 logged out. Waiting for processes to exit. Nov 1 00:22:24.805055 systemd-logind[1567]: Removed session 26. 
Nov 1 00:22:24.799000 audit[5732]: CRED_DISP pid=5732 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:24.867771 kernel: audit: type=1106 audit(1761956544.799:602): pid=5732 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:24.867912 kernel: audit: type=1104 audit(1761956544.799:603): pid=5732 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:24.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.42:22-10.200.16.10:50300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:22:27.353541 kubelet[2677]: E1101 00:22:27.353492 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4mt97" podUID="8e50a05e-0803-4e20-bd2b-ccf8c9d67c23" Nov 1 00:22:28.354590 env[1586]: time="2025-11-01T00:22:28.354550859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:22:28.571462 env[1586]: time="2025-11-01T00:22:28.571411259Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:22:28.575325 env[1586]: time="2025-11-01T00:22:28.575252965Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:22:28.575527 kubelet[2677]: E1101 00:22:28.575482 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:22:28.575867 kubelet[2677]: E1101 00:22:28.575846 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:22:28.576477 kubelet[2677]: E1101 00:22:28.576063 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bfrqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c8dcbbd64-qwkg7_calico-apiserver(da0e9dac-d5af-4669-8132-3ec847bb81ba): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:22:28.577783 kubelet[2677]: E1101 00:22:28.577739 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-qwkg7" podUID="da0e9dac-d5af-4669-8132-3ec847bb81ba" Nov 1 00:22:29.867560 systemd[1]: Started sshd@24-10.200.20.42:22-10.200.16.10:45392.service. 
Nov 1 00:22:29.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.42:22-10.200.16.10:45392 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:29.873567 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:22:29.873652 kernel: audit: type=1130 audit(1761956549.867:605): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.42:22-10.200.16.10:45392 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:30.286281 sshd[5769]: Accepted publickey for core from 10.200.16.10 port 45392 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:22:30.284000 audit[5769]: USER_ACCT pid=5769 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:30.311438 sshd[5769]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:22:30.309000 audit[5769]: CRED_ACQ pid=5769 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:30.317528 systemd[1]: Started session-27.scope. Nov 1 00:22:30.318542 systemd-logind[1567]: New session 27 of user core. 
Nov 1 00:22:30.336623 kernel: audit: type=1101 audit(1761956550.284:606): pid=5769 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:30.336724 kernel: audit: type=1103 audit(1761956550.309:607): pid=5769 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:30.356263 kernel: audit: type=1006 audit(1761956550.309:608): pid=5769 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Nov 1 00:22:30.309000 audit[5769]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcaf68ee0 a2=3 a3=1 items=0 ppid=1 pid=5769 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:30.382181 kernel: audit: type=1300 audit(1761956550.309:608): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcaf68ee0 a2=3 a3=1 items=0 ppid=1 pid=5769 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:30.309000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:22:30.395309 kernel: audit: type=1327 audit(1761956550.309:608): proctitle=737368643A20636F7265205B707269765D Nov 1 00:22:30.339000 audit[5769]: USER_START pid=5769 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 
terminal=ssh res=success' Nov 1 00:22:30.421513 kernel: audit: type=1105 audit(1761956550.339:609): pid=5769 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:30.341000 audit[5774]: CRED_ACQ pid=5774 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:30.450872 kernel: audit: type=1103 audit(1761956550.341:610): pid=5774 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:30.668415 sshd[5769]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:30.668000 audit[5769]: USER_END pid=5769 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:30.696533 systemd[1]: sshd@24-10.200.20.42:22-10.200.16.10:45392.service: Deactivated successfully. Nov 1 00:22:30.698107 systemd[1]: session-27.scope: Deactivated successfully. Nov 1 00:22:30.698700 systemd-logind[1567]: Session 27 logged out. Waiting for processes to exit. Nov 1 00:22:30.699611 systemd-logind[1567]: Removed session 27. 
Nov 1 00:22:30.679000 audit[5769]: CRED_DISP pid=5769 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:30.721463 kernel: audit: type=1106 audit(1761956550.668:611): pid=5769 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:30.721568 kernel: audit: type=1104 audit(1761956550.679:612): pid=5769 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:30.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.42:22-10.200.16.10:45392 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:22:31.353157 kubelet[2677]: E1101 00:22:31.353119 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-p85vf" podUID="ab7373cc-dd84-417d-8edc-59fbf979f4b4" Nov 1 00:22:34.355498 kubelet[2677]: E1101 00:22:34.355451 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pw8c5" podUID="1e69bd0a-b324-4064-9086-3d6aa0d23b51" Nov 1 00:22:35.354325 kubelet[2677]: E1101 00:22:35.354225 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b964bb46-4rknd" podUID="7f2d9b8c-e77a-4876-aeb0-3b35b890f02a" Nov 1 00:22:35.740168 systemd[1]: Started sshd@25-10.200.20.42:22-10.200.16.10:45394.service. Nov 1 00:22:35.768085 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:22:35.768187 kernel: audit: type=1130 audit(1761956555.740:614): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.42:22-10.200.16.10:45394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:35.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.42:22-10.200.16.10:45394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:22:36.186000 audit[5802]: USER_ACCT pid=5802 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:36.187765 sshd[5802]: Accepted publickey for core from 10.200.16.10 port 45394 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:22:36.190460 sshd[5802]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:22:36.188000 audit[5802]: CRED_ACQ pid=5802 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:36.240330 kernel: audit: type=1101 audit(1761956556.186:615): pid=5802 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:36.240484 kernel: audit: type=1103 audit(1761956556.188:616): pid=5802 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:36.245660 systemd-logind[1567]: New session 28 of user core. Nov 1 00:22:36.246361 systemd[1]: Started session-28.scope. 
Nov 1 00:22:36.266076 kernel: audit: type=1006 audit(1761956556.188:617): pid=5802 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Nov 1 00:22:36.188000 audit[5802]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff65872b0 a2=3 a3=1 items=0 ppid=1 pid=5802 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:36.301724 kernel: audit: type=1300 audit(1761956556.188:617): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff65872b0 a2=3 a3=1 items=0 ppid=1 pid=5802 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:36.188000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:22:36.313178 kernel: audit: type=1327 audit(1761956556.188:617): proctitle=737368643A20636F7265205B707269765D Nov 1 00:22:36.269000 audit[5802]: USER_START pid=5802 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:36.344467 kernel: audit: type=1105 audit(1761956556.269:618): pid=5802 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:36.270000 audit[5805]: CRED_ACQ pid=5805 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh 
res=success' Nov 1 00:22:36.355517 kubelet[2677]: E1101 00:22:36.355474 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-86c5674785-bs7n8" podUID="57cd90f3-35a2-40bb-93fb-693c3ffcd73d" Nov 1 00:22:36.370161 kernel: audit: type=1103 audit(1761956556.270:619): pid=5805 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:36.606504 sshd[5802]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:36.606000 audit[5802]: USER_END pid=5802 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:36.611497 systemd[1]: sshd@25-10.200.20.42:22-10.200.16.10:45394.service: Deactivated successfully. Nov 1 00:22:36.612377 systemd[1]: session-28.scope: Deactivated successfully. Nov 1 00:22:36.613843 systemd-logind[1567]: Session 28 logged out. Waiting for processes to exit. Nov 1 00:22:36.614885 systemd-logind[1567]: Removed session 28. 
Nov 1 00:22:36.606000 audit[5802]: CRED_DISP pid=5802 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:36.656774 kernel: audit: type=1106 audit(1761956556.606:620): pid=5802 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:36.656901 kernel: audit: type=1104 audit(1761956556.606:621): pid=5802 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:36.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.200.20.42:22-10.200.16.10:45394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:22:41.354404 kubelet[2677]: E1101 00:22:41.354361 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4mt97" podUID="8e50a05e-0803-4e20-bd2b-ccf8c9d67c23" Nov 1 00:22:41.673001 systemd[1]: Started sshd@26-10.200.20.42:22-10.200.16.10:39490.service. Nov 1 00:22:41.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.20.42:22-10.200.16.10:39490 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:41.679085 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:22:41.679171 kernel: audit: type=1130 audit(1761956561.672:623): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.20.42:22-10.200.16.10:39490 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:22:42.091870 sshd[5816]: Accepted publickey for core from 10.200.16.10 port 39490 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:22:42.090000 audit[5816]: USER_ACCT pid=5816 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:42.116571 sshd[5816]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:22:42.120958 systemd[1]: Started session-29.scope. Nov 1 00:22:42.121962 systemd-logind[1567]: New session 29 of user core. Nov 1 00:22:42.114000 audit[5816]: CRED_ACQ pid=5816 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:42.142315 kernel: audit: type=1101 audit(1761956562.090:624): pid=5816 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:42.142363 kernel: audit: type=1103 audit(1761956562.114:625): pid=5816 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:42.179812 kernel: audit: type=1006 audit(1761956562.114:626): pid=5816 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Nov 1 00:22:42.114000 audit[5816]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe7b2a6d0 a2=3 a3=1 items=0 ppid=1 pid=5816 auid=500 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:42.205086 kernel: audit: type=1300 audit(1761956562.114:626): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe7b2a6d0 a2=3 a3=1 items=0 ppid=1 pid=5816 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:42.114000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:22:42.135000 audit[5816]: USER_START pid=5816 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:42.240743 kernel: audit: type=1327 audit(1761956562.114:626): proctitle=737368643A20636F7265205B707269765D Nov 1 00:22:42.240834 kernel: audit: type=1105 audit(1761956562.135:627): pid=5816 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:42.136000 audit[5819]: CRED_ACQ pid=5819 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:42.265279 kernel: audit: type=1103 audit(1761956562.136:628): pid=5819 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:42.353842 kubelet[2677]: E1101 
00:22:42.353745 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-p85vf" podUID="ab7373cc-dd84-417d-8edc-59fbf979f4b4" Nov 1 00:22:42.354024 kubelet[2677]: E1101 00:22:42.353967 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c8dcbbd64-qwkg7" podUID="da0e9dac-d5af-4669-8132-3ec847bb81ba" Nov 1 00:22:42.513423 sshd[5816]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:42.512000 audit[5816]: USER_END pid=5816 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:42.543315 systemd-logind[1567]: Session 29 logged out. Waiting for processes to exit. Nov 1 00:22:42.543900 systemd[1]: sshd@26-10.200.20.42:22-10.200.16.10:39490.service: Deactivated successfully. Nov 1 00:22:42.544717 systemd[1]: session-29.scope: Deactivated successfully. 
Nov 1 00:22:42.545696 systemd-logind[1567]: Removed session 29. Nov 1 00:22:42.513000 audit[5816]: CRED_DISP pid=5816 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:42.584159 kernel: audit: type=1106 audit(1761956562.512:629): pid=5816 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:42.584260 kernel: audit: type=1104 audit(1761956562.513:630): pid=5816 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:42.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.200.20.42:22-10.200.16.10:39490 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:22:47.353541 kubelet[2677]: E1101 00:22:47.353456 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6b964bb46-4rknd" podUID="7f2d9b8c-e77a-4876-aeb0-3b35b890f02a" Nov 1 00:22:47.620919 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:22:47.621041 kernel: audit: type=1130 audit(1761956567.591:632): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.20.42:22-10.200.16.10:39502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:47.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.20.42:22-10.200.16.10:39502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:47.593024 systemd[1]: Started sshd@27-10.200.20.42:22-10.200.16.10:39502.service. 
Nov 1 00:22:48.034267 sshd[5829]: Accepted publickey for core from 10.200.16.10 port 39502 ssh2: RSA SHA256:JyxYDfrWcSc3T/AgB8prmyzM4mqcWmvKVj9wIAiMWXI Nov 1 00:22:48.032000 audit[5829]: USER_ACCT pid=5829 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:48.059093 sshd[5829]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:22:48.059451 kernel: audit: type=1101 audit(1761956568.032:633): pid=5829 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:48.059498 kernel: audit: type=1103 audit(1761956568.057:634): pid=5829 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:48.057000 audit[5829]: CRED_ACQ pid=5829 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:48.085623 systemd[1]: Started session-30.scope. Nov 1 00:22:48.086345 systemd-logind[1567]: New session 30 of user core. 
Nov 1 00:22:48.100298 kernel: audit: type=1006 audit(1761956568.057:635): pid=5829 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 Nov 1 00:22:48.057000 audit[5829]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcb29bba0 a2=3 a3=1 items=0 ppid=1 pid=5829 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:48.124894 kernel: audit: type=1300 audit(1761956568.057:635): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcb29bba0 a2=3 a3=1 items=0 ppid=1 pid=5829 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:48.057000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:22:48.099000 audit[5829]: USER_START pid=5829 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:48.134359 kernel: audit: type=1327 audit(1761956568.057:635): proctitle=737368643A20636F7265205B707269765D Nov 1 00:22:48.099000 audit[5832]: CRED_ACQ pid=5832 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:48.182312 kernel: audit: type=1105 audit(1761956568.099:636): pid=5829 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh 
res=success' Nov 1 00:22:48.182410 kernel: audit: type=1103 audit(1761956568.099:637): pid=5832 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:48.421805 sshd[5829]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:48.421000 audit[5829]: USER_END pid=5829 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:48.424807 systemd-logind[1567]: Session 30 logged out. Waiting for processes to exit. Nov 1 00:22:48.432842 systemd[1]: sshd@27-10.200.20.42:22-10.200.16.10:39502.service: Deactivated successfully. Nov 1 00:22:48.433662 systemd[1]: session-30.scope: Deactivated successfully. Nov 1 00:22:48.435751 systemd-logind[1567]: Removed session 30. Nov 1 00:22:48.450410 kernel: audit: type=1106 audit(1761956568.421:638): pid=5829 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:48.422000 audit[5829]: CRED_DISP pid=5829 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:48.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.200.20.42:22-10.200.16.10:39502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:22:48.472370 kernel: audit: type=1104 audit(1761956568.422:639): pid=5829 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.16.10 addr=10.200.16.10 terminal=ssh res=success' Nov 1 00:22:49.353750 kubelet[2677]: E1101 00:22:49.353716 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-pw8c5" podUID="1e69bd0a-b324-4064-9086-3d6aa0d23b51"