Nov 1 00:42:05.030328 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Oct 31 23:02:53 -00 2025
Nov 1 00:42:05.030359 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2
Nov 1 00:42:05.030378 kernel: BIOS-provided physical RAM map:
Nov 1 00:42:05.030389 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 1 00:42:05.030400 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Nov 1 00:42:05.030411 kernel: BIOS-e820: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Nov 1 00:42:05.030425 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Nov 1 00:42:05.030437 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Nov 1 00:42:05.030451 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Nov 1 00:42:05.030463 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Nov 1 00:42:05.030474 kernel: NX (Execute Disable) protection: active
Nov 1 00:42:05.030486 kernel: e820: update [mem 0x76813018-0x7681be57] usable ==> usable
Nov 1 00:42:05.030498 kernel: e820: update [mem 0x76813018-0x7681be57] usable ==> usable
Nov 1 00:42:05.030510 kernel: extended physical RAM map:
Nov 1 00:42:05.030528 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 1 00:42:05.030541 kernel: reserve setup_data: [mem 0x0000000000100000-0x0000000076813017] usable
Nov 1 00:42:05.030554 kernel: reserve setup_data: [mem 0x0000000076813018-0x000000007681be57] usable
Nov 1 00:42:05.030567 kernel: reserve setup_data: [mem 0x000000007681be58-0x00000000786cdfff] usable
Nov 1 00:42:05.030579 kernel: reserve setup_data: [mem 0x00000000786ce000-0x000000007894dfff] reserved
Nov 1 00:42:05.030592 kernel: reserve setup_data: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Nov 1 00:42:05.030605 kernel: reserve setup_data: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Nov 1 00:42:05.030618 kernel: reserve setup_data: [mem 0x00000000789de000-0x000000007c97bfff] usable
Nov 1 00:42:05.030630 kernel: reserve setup_data: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Nov 1 00:42:05.030643 kernel: efi: EFI v2.70 by EDK II
Nov 1 00:42:05.030658 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x77003a98
Nov 1 00:42:05.030670 kernel: SMBIOS 2.7 present.
Nov 1 00:42:05.030682 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Nov 1 00:42:05.030693 kernel: Hypervisor detected: KVM
Nov 1 00:42:05.030706 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 1 00:42:05.030719 kernel: kvm-clock: cpu 0, msr 4a1a0001, primary cpu clock
Nov 1 00:42:05.030731 kernel: kvm-clock: using sched offset of 4888291404 cycles
Nov 1 00:42:05.030745 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 1 00:42:05.030758 kernel: tsc: Detected 2499.996 MHz processor
Nov 1 00:42:05.030771 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 00:42:05.030820 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 00:42:05.030836 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Nov 1 00:42:05.030849 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 00:42:05.030862 kernel: Using GB pages for direct mapping
Nov 1 00:42:05.030875 kernel: Secure boot disabled
Nov 1 00:42:05.030888 kernel: ACPI: Early table checksum verification disabled
Nov 1 00:42:05.030907 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Nov 1 00:42:05.030921 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Nov 1 00:42:05.030938 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Nov 1 00:42:05.030952 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Nov 1 00:42:05.030965 kernel: ACPI: FACS 0x00000000789D0000 000040
Nov 1 00:42:05.030980 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Nov 1 00:42:05.030994 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Nov 1 00:42:05.031008 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Nov 1 00:42:05.031022 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Nov 1 00:42:05.031039 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Nov 1 00:42:05.031053 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Nov 1 00:42:05.031067 kernel: ACPI: SSDT 0x0000000078952000 00007F (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Nov 1 00:42:05.031081 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Nov 1 00:42:05.031095 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Nov 1 00:42:05.031109 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Nov 1 00:42:05.031124 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Nov 1 00:42:05.031138 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Nov 1 00:42:05.031152 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Nov 1 00:42:05.031169 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Nov 1 00:42:05.031183 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Nov 1 00:42:05.031197 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Nov 1 00:42:05.031211 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Nov 1 00:42:05.031225 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x7895207e]
Nov 1 00:42:05.031239 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Nov 1 00:42:05.031253 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 1 00:42:05.031267 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 1 00:42:05.031282 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Nov 1 00:42:05.031299 kernel: NUMA: Initialized distance table, cnt=1
Nov 1 00:42:05.031313 kernel: NODE_DATA(0) allocated [mem 0x7a8ef000-0x7a8f4fff]
Nov 1 00:42:05.031327 kernel: Zone ranges:
Nov 1 00:42:05.031341 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 00:42:05.031356 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Nov 1 00:42:05.031370 kernel: Normal empty
Nov 1 00:42:05.031384 kernel: Movable zone start for each node
Nov 1 00:42:05.031397 kernel: Early memory node ranges
Nov 1 00:42:05.031412 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 1 00:42:05.031428 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Nov 1 00:42:05.031442 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Nov 1 00:42:05.031456 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Nov 1 00:42:05.031471 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 00:42:05.031484 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 1 00:42:05.031498 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Nov 1 00:42:05.031512 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Nov 1 00:42:05.031527 kernel: ACPI: PM-Timer IO Port: 0xb008
Nov 1 00:42:05.031540 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 1 00:42:05.031557 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Nov 1 00:42:05.031571 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 1 00:42:05.031586 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 00:42:05.031599 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 1 00:42:05.031614 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 1 00:42:05.031628 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 00:42:05.031643 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 1 00:42:05.031656 kernel: TSC deadline timer available
Nov 1 00:42:05.031670 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 1 00:42:05.031687 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Nov 1 00:42:05.031701 kernel: Booting paravirtualized kernel on KVM
Nov 1 00:42:05.031716 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 00:42:05.031730 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Nov 1 00:42:05.031745 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Nov 1 00:42:05.031759 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Nov 1 00:42:05.031773 kernel: pcpu-alloc: [0] 0 1
Nov 1 00:42:05.031808 kernel: kvm-guest: stealtime: cpu 0, msr 7a41c0c0
Nov 1 00:42:05.031822 kernel: kvm-guest: PV spinlocks enabled
Nov 1 00:42:05.031839 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 1 00:42:05.031853 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Nov 1 00:42:05.031867 kernel: Policy zone: DMA32
Nov 1 00:42:05.031884 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2
Nov 1 00:42:05.031899 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 1 00:42:05.031913 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 1 00:42:05.031934 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 1 00:42:05.031949 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 00:42:05.031966 kernel: Memory: 1876636K/2037804K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47496K init, 4084K bss, 160908K reserved, 0K cma-reserved)
Nov 1 00:42:05.031980 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 1 00:42:05.031994 kernel: Kernel/User page tables isolation: enabled
Nov 1 00:42:05.032009 kernel: ftrace: allocating 34614 entries in 136 pages
Nov 1 00:42:05.032022 kernel: ftrace: allocated 136 pages with 2 groups
Nov 1 00:42:05.032037 kernel: rcu: Hierarchical RCU implementation.
Nov 1 00:42:05.032052 kernel: rcu: RCU event tracing is enabled.
Nov 1 00:42:05.032080 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 1 00:42:05.032095 kernel: Rude variant of Tasks RCU enabled.
Nov 1 00:42:05.032110 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 00:42:05.032125 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 00:42:05.032140 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 1 00:42:05.032157 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 1 00:42:05.032172 kernel: random: crng init done
Nov 1 00:42:05.032187 kernel: Console: colour dummy device 80x25
Nov 1 00:42:05.032202 kernel: printk: console [tty0] enabled
Nov 1 00:42:05.032217 kernel: printk: console [ttyS0] enabled
Nov 1 00:42:05.032232 kernel: ACPI: Core revision 20210730
Nov 1 00:42:05.032248 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Nov 1 00:42:05.032266 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 00:42:05.032281 kernel: x2apic enabled
Nov 1 00:42:05.032296 kernel: Switched APIC routing to physical x2apic.
Nov 1 00:42:05.032311 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Nov 1 00:42:05.032326 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Nov 1 00:42:05.032341 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Nov 1 00:42:05.032356 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Nov 1 00:42:05.032373 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 00:42:05.032384 kernel: Spectre V2 : Mitigation: Retpolines
Nov 1 00:42:05.032395 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 1 00:42:05.032407 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Nov 1 00:42:05.032419 kernel: RETBleed: Vulnerable
Nov 1 00:42:05.032433 kernel: Speculative Store Bypass: Vulnerable
Nov 1 00:42:05.032446 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 1 00:42:05.032461 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 1 00:42:05.032475 kernel: GDS: Unknown: Dependent on hypervisor status
Nov 1 00:42:05.032489 kernel: active return thunk: its_return_thunk
Nov 1 00:42:05.032503 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 1 00:42:05.032521 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 00:42:05.032535 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 00:42:05.032550 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 00:42:05.032564 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Nov 1 00:42:05.032578 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Nov 1 00:42:05.032593 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Nov 1 00:42:05.032607 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Nov 1 00:42:05.032621 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Nov 1 00:42:05.032636 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Nov 1 00:42:05.032650 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 00:42:05.032667 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Nov 1 00:42:05.032681 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Nov 1 00:42:05.032695 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Nov 1 00:42:05.032708 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Nov 1 00:42:05.032722 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Nov 1 00:42:05.032736 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Nov 1 00:42:05.032750 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Nov 1 00:42:05.032765 kernel: Freeing SMP alternatives memory: 32K
Nov 1 00:42:05.032800 kernel: pid_max: default: 32768 minimum: 301
Nov 1 00:42:05.032813 kernel: LSM: Security Framework initializing
Nov 1 00:42:05.032828 kernel: SELinux: Initializing.
Nov 1 00:42:05.032842 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 1 00:42:05.032860 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 1 00:42:05.032874 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Nov 1 00:42:05.032889 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Nov 1 00:42:05.032904 kernel: signal: max sigframe size: 3632
Nov 1 00:42:05.032918 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 00:42:05.032933 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 1 00:42:05.032948 kernel: smp: Bringing up secondary CPUs ...
Nov 1 00:42:05.032962 kernel: x86: Booting SMP configuration:
Nov 1 00:42:05.032977 kernel: .... node #0, CPUs: #1
Nov 1 00:42:05.032991 kernel: kvm-clock: cpu 1, msr 4a1a0041, secondary cpu clock
Nov 1 00:42:05.033009 kernel: kvm-guest: stealtime: cpu 1, msr 7a51c0c0
Nov 1 00:42:05.033025 kernel: Transient Scheduler Attacks: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Nov 1 00:42:05.033041 kernel: Transient Scheduler Attacks: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Nov 1 00:42:05.033055 kernel: smp: Brought up 1 node, 2 CPUs
Nov 1 00:42:05.033070 kernel: smpboot: Max logical packages: 1
Nov 1 00:42:05.033085 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Nov 1 00:42:05.033100 kernel: devtmpfs: initialized
Nov 1 00:42:05.033114 kernel: x86/mm: Memory block size: 128MB
Nov 1 00:42:05.033132 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Nov 1 00:42:05.033146 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 00:42:05.033160 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 1 00:42:05.033175 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 00:42:05.033190 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 00:42:05.033204 kernel: audit: initializing netlink subsys (disabled)
Nov 1 00:42:05.033218 kernel: audit: type=2000 audit(1761957724.759:1): state=initialized audit_enabled=0 res=1
Nov 1 00:42:05.033233 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 00:42:05.033248 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 1 00:42:05.033265 kernel: cpuidle: using governor menu
Nov 1 00:42:05.033280 kernel: ACPI: bus type PCI registered
Nov 1 00:42:05.033295 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 00:42:05.033310 kernel: dca service started, version 1.12.1
Nov 1 00:42:05.033325 kernel: PCI: Using configuration type 1 for base access
Nov 1 00:42:05.033340 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 1 00:42:05.033354 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Nov 1 00:42:05.033369 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 00:42:05.033384 kernel: ACPI: Added _OSI(Module Device)
Nov 1 00:42:05.033401 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 00:42:05.033416 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 00:42:05.033431 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Nov 1 00:42:05.033446 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Nov 1 00:42:05.033460 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Nov 1 00:42:05.033475 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Nov 1 00:42:05.033490 kernel: ACPI: Interpreter enabled
Nov 1 00:42:05.033504 kernel: ACPI: PM: (supports S0 S5)
Nov 1 00:42:05.033519 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 1 00:42:05.033536 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 1 00:42:05.033551 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 1 00:42:05.033566 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 1 00:42:05.033805 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 1 00:42:05.033943 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Nov 1 00:42:05.033961 kernel: acpiphp: Slot [3] registered
Nov 1 00:42:05.033975 kernel: acpiphp: Slot [4] registered
Nov 1 00:42:05.033992 kernel: acpiphp: Slot [5] registered
Nov 1 00:42:05.034006 kernel: acpiphp: Slot [6] registered
Nov 1 00:42:05.034020 kernel: acpiphp: Slot [7] registered
Nov 1 00:42:05.034034 kernel: acpiphp: Slot [8] registered
Nov 1 00:42:05.034047 kernel: acpiphp: Slot [9] registered
Nov 1 00:42:05.034061 kernel: acpiphp: Slot [10] registered
Nov 1 00:42:05.034075 kernel: acpiphp: Slot [11] registered
Nov 1 00:42:05.034088 kernel: acpiphp: Slot [12] registered
Nov 1 00:42:05.034102 kernel: acpiphp: Slot [13] registered
Nov 1 00:42:05.034116 kernel: acpiphp: Slot [14] registered
Nov 1 00:42:05.034133 kernel: acpiphp: Slot [15] registered
Nov 1 00:42:05.034145 kernel: acpiphp: Slot [16] registered
Nov 1 00:42:05.034158 kernel: acpiphp: Slot [17] registered
Nov 1 00:42:05.034171 kernel: acpiphp: Slot [18] registered
Nov 1 00:42:05.034186 kernel: acpiphp: Slot [19] registered
Nov 1 00:42:05.034199 kernel: acpiphp: Slot [20] registered
Nov 1 00:42:05.034212 kernel: acpiphp: Slot [21] registered
Nov 1 00:42:05.034225 kernel: acpiphp: Slot [22] registered
Nov 1 00:42:05.034239 kernel: acpiphp: Slot [23] registered
Nov 1 00:42:05.034255 kernel: acpiphp: Slot [24] registered
Nov 1 00:42:05.034268 kernel: acpiphp: Slot [25] registered
Nov 1 00:42:05.034282 kernel: acpiphp: Slot [26] registered
Nov 1 00:42:05.034297 kernel: acpiphp: Slot [27] registered
Nov 1 00:42:05.034311 kernel: acpiphp: Slot [28] registered
Nov 1 00:42:05.034325 kernel: acpiphp: Slot [29] registered
Nov 1 00:42:05.034339 kernel: acpiphp: Slot [30] registered
Nov 1 00:42:05.034354 kernel: acpiphp: Slot [31] registered
Nov 1 00:42:05.034369 kernel: PCI host bridge to bus 0000:00
Nov 1 00:42:05.034508 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 1 00:42:05.034624 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 1 00:42:05.034735 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 1 00:42:05.034898 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Nov 1 00:42:05.035015 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Nov 1 00:42:05.035131 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 1 00:42:05.035278 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Nov 1 00:42:05.035421 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Nov 1 00:42:05.035560 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Nov 1 00:42:05.035694 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Nov 1 00:42:05.035847 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Nov 1 00:42:05.035984 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Nov 1 00:42:05.036117 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Nov 1 00:42:05.036244 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Nov 1 00:42:05.036377 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Nov 1 00:42:05.036502 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Nov 1 00:42:05.036638 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Nov 1 00:42:05.036766 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Nov 1 00:42:05.036913 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Nov 1 00:42:05.037032 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Nov 1 00:42:05.037154 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 1 00:42:05.037292 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Nov 1 00:42:05.037412 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Nov 1 00:42:05.037535 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Nov 1 00:42:05.037656 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Nov 1 00:42:05.037673 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 1 00:42:05.037687 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 1 00:42:05.037704 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 1 00:42:05.037717 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 1 00:42:05.037731 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 1 00:42:05.037744 kernel: iommu: Default domain type: Translated
Nov 1 00:42:05.037758 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 1 00:42:05.037886 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Nov 1 00:42:05.038007 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 1 00:42:05.038126 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Nov 1 00:42:05.038143 kernel: vgaarb: loaded
Nov 1 00:42:05.038159 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 1 00:42:05.038173 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Nov 1 00:42:05.038186 kernel: PTP clock support registered
Nov 1 00:42:05.038199 kernel: Registered efivars operations
Nov 1 00:42:05.038211 kernel: PCI: Using ACPI for IRQ routing
Nov 1 00:42:05.038224 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 1 00:42:05.038238 kernel: e820: reserve RAM buffer [mem 0x76813018-0x77ffffff]
Nov 1 00:42:05.038253 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Nov 1 00:42:05.038267 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Nov 1 00:42:05.038284 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Nov 1 00:42:05.038299 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Nov 1 00:42:05.038313 kernel: clocksource: Switched to clocksource kvm-clock
Nov 1 00:42:05.038328 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 00:42:05.038343 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 00:42:05.038358 kernel: pnp: PnP ACPI init
Nov 1 00:42:05.038372 kernel: pnp: PnP ACPI: found 5 devices
Nov 1 00:42:05.038387 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 1 00:42:05.038402 kernel: NET: Registered PF_INET protocol family
Nov 1 00:42:05.038418 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 1 00:42:05.038432 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 1 00:42:05.038446 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 00:42:05.038462 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 1 00:42:05.038477 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Nov 1 00:42:05.038491 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 1 00:42:05.038506 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 1 00:42:05.038522 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 1 00:42:05.038539 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 00:42:05.038554 kernel: NET: Registered PF_XDP protocol family
Nov 1 00:42:05.038683 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 1 00:42:05.038825 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 1 00:42:05.038944 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 1 00:42:05.039059 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Nov 1 00:42:05.039176 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Nov 1 00:42:05.039314 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 1 00:42:05.039450 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Nov 1 00:42:05.039473 kernel: PCI: CLS 0 bytes, default 64
Nov 1 00:42:05.039489 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 1 00:42:05.039504 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Nov 1 00:42:05.039519 kernel: clocksource: Switched to clocksource tsc
Nov 1 00:42:05.039534 kernel: Initialise system trusted keyrings
Nov 1 00:42:05.039549 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Nov 1 00:42:05.039563 kernel: Key type asymmetric registered
Nov 1 00:42:05.039578 kernel: Asymmetric key parser 'x509' registered
Nov 1 00:42:05.039595 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Nov 1 00:42:05.039610 kernel: io scheduler mq-deadline registered
Nov 1 00:42:05.039624 kernel: io scheduler kyber registered
Nov 1 00:42:05.039639 kernel: io scheduler bfq registered
Nov 1 00:42:05.039654 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 1 00:42:05.039669 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 1 00:42:05.039684 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 1 00:42:05.039699 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 1 00:42:05.039713 kernel: i8042: Warning: Keylock active
Nov 1 00:42:05.039730 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 1 00:42:05.039744 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 1 00:42:05.040018 kernel: rtc_cmos 00:00: RTC can wake from S4
Nov 1 00:42:05.040140 kernel: rtc_cmos 00:00: registered as rtc0
Nov 1 00:42:05.040260 kernel: rtc_cmos 00:00: setting system clock to 2025-11-01T00:42:04 UTC (1761957724)
Nov 1 00:42:05.040379 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Nov 1 00:42:05.040399 kernel: intel_pstate: CPU model not supported
Nov 1 00:42:05.040414 kernel: efifb: probing for efifb
Nov 1 00:42:05.040434 kernel: efifb: framebuffer at 0x80000000, using 1876k, total 1875k
Nov 1 00:42:05.040450 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Nov 1 00:42:05.040465 kernel: efifb: scrolling: redraw
Nov 1 00:42:05.040480 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Nov 1 00:42:05.040495 kernel: Console: switching to colour frame buffer device 100x37
Nov 1 00:42:05.040511 kernel: fb0: EFI VGA frame buffer device
Nov 1 00:42:05.040551 kernel: pstore: Registered efi as persistent store backend
Nov 1 00:42:05.040569 kernel: NET: Registered PF_INET6 protocol family
Nov 1 00:42:05.040585 kernel: Segment Routing with IPv6
Nov 1 00:42:05.040604 kernel: In-situ OAM (IOAM) with IPv6
Nov 1 00:42:05.040620 kernel: NET: Registered PF_PACKET protocol family
Nov 1 00:42:05.040636 kernel: Key type dns_resolver registered
Nov 1 00:42:05.040652 kernel: IPI shorthand broadcast: enabled
Nov 1 00:42:05.040668 kernel: sched_clock: Marking stable (410903078, 158583443)->(660336849, -90850328)
Nov 1 00:42:05.040684 kernel: registered taskstats version 1
Nov 1 00:42:05.040700 kernel: Loading compiled-in X.509 certificates
Nov 1 00:42:05.040716 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: f2055682e6899ad8548fd369019e7b47939b46a0'
Nov 1 00:42:05.040732 kernel: Key type .fscrypt registered
Nov 1 00:42:05.040750 kernel: Key type fscrypt-provisioning registered
Nov 1 00:42:05.040769 kernel: pstore: Using crash dump compression: deflate
Nov 1 00:42:05.040796 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 1 00:42:05.040811 kernel: ima: Allocated hash algorithm: sha1
Nov 1 00:42:05.040826 kernel: ima: No architecture policies found
Nov 1 00:42:05.040842 kernel: clk: Disabling unused clocks
Nov 1 00:42:05.040857 kernel: Freeing unused kernel image (initmem) memory: 47496K
Nov 1 00:42:05.040873 kernel: Write protecting the kernel read-only data: 28672k
Nov 1 00:42:05.040888 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Nov 1 00:42:05.040907 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K
Nov 1 00:42:05.040923 kernel: Run /init as init process
Nov 1 00:42:05.040938 kernel: with arguments:
Nov 1 00:42:05.040953 kernel: /init
Nov 1 00:42:05.040968 kernel: with environment:
Nov 1 00:42:05.040984 kernel: HOME=/
Nov 1 00:42:05.040999 kernel: TERM=linux
Nov 1 00:42:05.041014 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 1 00:42:05.041034 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 1 00:42:05.041056 systemd[1]: Detected virtualization amazon.
Nov 1 00:42:05.041073 systemd[1]: Detected architecture x86-64.
Nov 1 00:42:05.041088 systemd[1]: Running in initrd.
Nov 1 00:42:05.041104 systemd[1]: No hostname configured, using default hostname.
Nov 1 00:42:05.041120 systemd[1]: Hostname set to .
Nov 1 00:42:05.041137 systemd[1]: Initializing machine ID from VM UUID.
Nov 1 00:42:05.041154 systemd[1]: Queued start job for default target initrd.target.
Nov 1 00:42:05.041173 systemd[1]: Started systemd-ask-password-console.path.
Nov 1 00:42:05.041189 systemd[1]: Reached target cryptsetup.target.
Nov 1 00:42:05.041205 systemd[1]: Reached target paths.target.
Nov 1 00:42:05.041221 systemd[1]: Reached target slices.target.
Nov 1 00:42:05.041239 systemd[1]: Reached target swap.target.
Nov 1 00:42:05.041258 systemd[1]: Reached target timers.target.
Nov 1 00:42:05.041275 systemd[1]: Listening on iscsid.socket.
Nov 1 00:42:05.041291 systemd[1]: Listening on iscsiuio.socket.
Nov 1 00:42:05.041308 systemd[1]: Listening on systemd-journald-audit.socket.
Nov 1 00:42:05.041324 systemd[1]: Listening on systemd-journald-dev-log.socket.
Nov 1 00:42:05.041341 systemd[1]: Listening on systemd-journald.socket.
Nov 1 00:42:05.041357 systemd[1]: Listening on systemd-networkd.socket.
Nov 1 00:42:05.041374 systemd[1]: Listening on systemd-udevd-control.socket.
Nov 1 00:42:05.041393 systemd[1]: Listening on systemd-udevd-kernel.socket.
Nov 1 00:42:05.041408 systemd[1]: Reached target sockets.target.
Nov 1 00:42:05.041424 systemd[1]: Starting kmod-static-nodes.service...
Nov 1 00:42:05.041440 systemd[1]: Finished network-cleanup.service.
Nov 1 00:42:05.041457 systemd[1]: Starting systemd-fsck-usr.service...
Nov 1 00:42:05.041473 systemd[1]: Starting systemd-journald.service...
Nov 1 00:42:05.041489 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 1 00:42:05.041506 systemd[1]: Starting systemd-modules-load.service...
Nov 1 00:42:05.041522 systemd[1]: Starting systemd-resolved.service...
Nov 1 00:42:05.041541 systemd[1]: Starting systemd-vconsole-setup.service...
Nov 1 00:42:05.041564 systemd-journald[185]: Journal started
Nov 1 00:42:05.041640 systemd-journald[185]: Runtime Journal (/run/log/journal/ec29b4b8cf1fb90d924910734d513851) is 4.8M, max 38.3M, 33.5M free.
Nov 1 00:42:05.055684 systemd[1]: Started systemd-journald.service.
Nov 1 00:42:05.055761 kernel: audit: type=1130 audit(1761957725.045:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:05.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:05.048050 systemd[1]: Finished kmod-static-nodes.service.
Nov 1 00:42:05.067001 kernel: audit: type=1130 audit(1761957725.055:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:05.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:05.057310 systemd[1]: Finished systemd-fsck-usr.service.
Nov 1 00:42:05.070242 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Nov 1 00:42:05.089507 kernel: audit: type=1130 audit(1761957725.056:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:05.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:05.071756 systemd[1]: Finished systemd-vconsole-setup.service.
Nov 1 00:42:05.083927 systemd-modules-load[186]: Inserted module 'overlay' Nov 1 00:42:05.097971 kernel: audit: type=1130 audit(1761957725.085:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:05.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:05.088358 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Nov 1 00:42:05.097017 systemd-resolved[187]: Positive Trust Anchors: Nov 1 00:42:05.097030 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:42:05.097089 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 00:42:05.101378 systemd-resolved[187]: Defaulting to hostname 'linux'. Nov 1 00:42:05.130670 kernel: audit: type=1130 audit(1761957725.109:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:05.130705 kernel: audit: type=1130 audit(1761957725.118:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:42:05.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:05.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:05.119003 systemd[1]: Started systemd-resolved.service. Nov 1 00:42:05.120424 systemd[1]: Reached target nss-lookup.target. Nov 1 00:42:05.128947 systemd[1]: Starting dracut-cmdline-ask.service... Nov 1 00:42:05.152810 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 1 00:42:05.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:05.153180 systemd[1]: Finished dracut-cmdline-ask.service. Nov 1 00:42:05.171430 kernel: audit: type=1130 audit(1761957725.152:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:05.171467 kernel: Bridge firewalling registered Nov 1 00:42:05.155609 systemd[1]: Starting dracut-cmdline.service... 
Nov 1 00:42:05.166926 systemd-modules-load[186]: Inserted module 'br_netfilter' Nov 1 00:42:05.181538 dracut-cmdline[203]: dracut-dracut-053 Nov 1 00:42:05.186184 dracut-cmdline[203]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2 Nov 1 00:42:05.198803 kernel: SCSI subsystem initialized Nov 1 00:42:05.220858 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 1 00:42:05.220931 kernel: device-mapper: uevent: version 1.0.3 Nov 1 00:42:05.223847 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Nov 1 00:42:05.228381 systemd-modules-load[186]: Inserted module 'dm_multipath' Nov 1 00:42:05.230134 systemd[1]: Finished systemd-modules-load.service. Nov 1 00:42:05.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:05.238889 kernel: audit: type=1130 audit(1761957725.230:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:05.233579 systemd[1]: Starting systemd-sysctl.service... Nov 1 00:42:05.248925 systemd[1]: Finished systemd-sysctl.service. 
Nov 1 00:42:05.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:05.258816 kernel: audit: type=1130 audit(1761957725.248:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:05.279808 kernel: Loading iSCSI transport class v2.0-870. Nov 1 00:42:05.298808 kernel: iscsi: registered transport (tcp) Nov 1 00:42:05.323564 kernel: iscsi: registered transport (qla4xxx) Nov 1 00:42:05.323645 kernel: QLogic iSCSI HBA Driver Nov 1 00:42:05.356677 systemd[1]: Finished dracut-cmdline.service. Nov 1 00:42:05.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:05.358738 systemd[1]: Starting dracut-pre-udev.service... 
Nov 1 00:42:05.410840 kernel: raid6: avx512x4 gen() 17904 MB/s Nov 1 00:42:05.428831 kernel: raid6: avx512x4 xor() 8001 MB/s Nov 1 00:42:05.446817 kernel: raid6: avx512x2 gen() 17701 MB/s Nov 1 00:42:05.464818 kernel: raid6: avx512x2 xor() 24267 MB/s Nov 1 00:42:05.482816 kernel: raid6: avx512x1 gen() 17579 MB/s Nov 1 00:42:05.500815 kernel: raid6: avx512x1 xor() 21955 MB/s Nov 1 00:42:05.518827 kernel: raid6: avx2x4 gen() 17604 MB/s Nov 1 00:42:05.536824 kernel: raid6: avx2x4 xor() 7532 MB/s Nov 1 00:42:05.554830 kernel: raid6: avx2x2 gen() 17513 MB/s Nov 1 00:42:05.572815 kernel: raid6: avx2x2 xor() 18179 MB/s Nov 1 00:42:05.590828 kernel: raid6: avx2x1 gen() 13882 MB/s Nov 1 00:42:05.608817 kernel: raid6: avx2x1 xor() 15892 MB/s Nov 1 00:42:05.626824 kernel: raid6: sse2x4 gen() 9593 MB/s Nov 1 00:42:05.644813 kernel: raid6: sse2x4 xor() 6000 MB/s Nov 1 00:42:05.662813 kernel: raid6: sse2x2 gen() 10602 MB/s Nov 1 00:42:05.680825 kernel: raid6: sse2x2 xor() 6136 MB/s Nov 1 00:42:05.698827 kernel: raid6: sse2x1 gen() 9435 MB/s Nov 1 00:42:05.717082 kernel: raid6: sse2x1 xor() 4831 MB/s Nov 1 00:42:05.717132 kernel: raid6: using algorithm avx512x4 gen() 17904 MB/s Nov 1 00:42:05.717162 kernel: raid6: .... xor() 8001 MB/s, rmw enabled Nov 1 00:42:05.718184 kernel: raid6: using avx512x2 recovery algorithm Nov 1 00:42:05.732809 kernel: xor: automatically using best checksumming function avx Nov 1 00:42:05.836811 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Nov 1 00:42:05.845900 systemd[1]: Finished dracut-pre-udev.service. Nov 1 00:42:05.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:05.845000 audit: BPF prog-id=7 op=LOAD Nov 1 00:42:05.845000 audit: BPF prog-id=8 op=LOAD Nov 1 00:42:05.847455 systemd[1]: Starting systemd-udevd.service... 
Nov 1 00:42:05.860726 systemd-udevd[385]: Using default interface naming scheme 'v252'. Nov 1 00:42:05.866096 systemd[1]: Started systemd-udevd.service. Nov 1 00:42:05.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:05.870892 systemd[1]: Starting dracut-pre-trigger.service... Nov 1 00:42:05.889059 dracut-pre-trigger[398]: rd.md=0: removing MD RAID activation Nov 1 00:42:05.921730 systemd[1]: Finished dracut-pre-trigger.service. Nov 1 00:42:05.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:05.923209 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 00:42:05.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:05.968033 systemd[1]: Finished systemd-udev-trigger.service. Nov 1 00:42:06.023802 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 00:42:06.069874 kernel: AVX2 version of gcm_enc/dec engaged. Nov 1 00:42:06.069952 kernel: nvme nvme0: pci function 0000:00:04.0 Nov 1 00:42:06.071632 kernel: AES CTR mode by8 optimization enabled Nov 1 00:42:06.071681 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Nov 1 00:42:06.079239 kernel: ena 0000:00:05.0: ENA device version: 0.10 Nov 1 00:42:06.089880 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Nov 1 00:42:06.090070 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. 
Nov 1 00:42:06.090223 kernel: nvme nvme0: 2/0/0 default/read/poll queues Nov 1 00:42:06.090387 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:ba:38:40:ca:2d Nov 1 00:42:06.093702 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 1 00:42:06.093766 kernel: GPT:9289727 != 33554431 Nov 1 00:42:06.093799 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 1 00:42:06.096121 kernel: GPT:9289727 != 33554431 Nov 1 00:42:06.096175 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 1 00:42:06.098432 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 1 00:42:06.101479 (udev-worker)[439]: Network interface NamePolicy= disabled on kernel command line. Nov 1 00:42:06.157810 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (438) Nov 1 00:42:06.180026 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Nov 1 00:42:06.194558 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Nov 1 00:42:06.234004 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Nov 1 00:42:06.244591 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Nov 1 00:42:06.245477 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Nov 1 00:42:06.252142 systemd[1]: Starting disk-uuid.service... Nov 1 00:42:06.259120 disk-uuid[594]: Primary Header is updated. Nov 1 00:42:06.259120 disk-uuid[594]: Secondary Entries is updated. Nov 1 00:42:06.259120 disk-uuid[594]: Secondary Header is updated. Nov 1 00:42:06.265805 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 1 00:42:06.272827 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 1 00:42:06.278800 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 1 00:42:07.282717 disk-uuid[595]: The operation has completed successfully. Nov 1 00:42:07.284092 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 1 00:42:07.460592 systemd[1]: disk-uuid.service: Deactivated successfully. 
Nov 1 00:42:07.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:07.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:07.460713 systemd[1]: Finished disk-uuid.service. Nov 1 00:42:07.472859 systemd[1]: Starting verity-setup.service... Nov 1 00:42:07.492919 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 1 00:42:07.618171 systemd[1]: Found device dev-mapper-usr.device. Nov 1 00:42:07.620579 systemd[1]: Mounting sysusr-usr.mount... Nov 1 00:42:07.623552 systemd[1]: Finished verity-setup.service. Nov 1 00:42:07.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:07.742802 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Nov 1 00:42:07.743543 systemd[1]: Mounted sysusr-usr.mount. Nov 1 00:42:07.744536 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Nov 1 00:42:07.745581 systemd[1]: Starting ignition-setup.service... Nov 1 00:42:07.750048 systemd[1]: Starting parse-ip-for-networkd.service... Nov 1 00:42:07.774715 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:42:07.774799 kernel: BTRFS info (device nvme0n1p6): using free space tree Nov 1 00:42:07.774821 kernel: BTRFS info (device nvme0n1p6): has skinny extents Nov 1 00:42:07.786805 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 1 00:42:07.803228 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 1 00:42:07.814275 systemd[1]: Finished ignition-setup.service. 
Nov 1 00:42:07.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:07.816460 systemd[1]: Starting ignition-fetch-offline.service... Nov 1 00:42:07.843861 systemd[1]: Finished parse-ip-for-networkd.service. Nov 1 00:42:07.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:07.843000 audit: BPF prog-id=9 op=LOAD Nov 1 00:42:07.846344 systemd[1]: Starting systemd-networkd.service... Nov 1 00:42:07.871531 systemd-networkd[1106]: lo: Link UP Nov 1 00:42:07.871545 systemd-networkd[1106]: lo: Gained carrier Nov 1 00:42:07.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:07.872769 systemd-networkd[1106]: Enumeration completed Nov 1 00:42:07.872920 systemd[1]: Started systemd-networkd.service. Nov 1 00:42:07.873344 systemd-networkd[1106]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:42:07.874271 systemd[1]: Reached target network.target. Nov 1 00:42:07.876229 systemd[1]: Starting iscsiuio.service... Nov 1 00:42:07.884840 systemd[1]: Started iscsiuio.service. Nov 1 00:42:07.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:07.885165 systemd-networkd[1106]: eth0: Link UP Nov 1 00:42:07.885171 systemd-networkd[1106]: eth0: Gained carrier Nov 1 00:42:07.887304 systemd[1]: Starting iscsid.service... 
Nov 1 00:42:07.892080 iscsid[1111]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Nov 1 00:42:07.892080 iscsid[1111]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Nov 1 00:42:07.892080 iscsid[1111]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Nov 1 00:42:07.892080 iscsid[1111]: If using hardware iscsi like qla4xxx this message can be ignored. Nov 1 00:42:07.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:07.901059 iscsid[1111]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Nov 1 00:42:07.901059 iscsid[1111]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Nov 1 00:42:07.896348 systemd[1]: Started iscsid.service. Nov 1 00:42:07.900066 systemd[1]: Starting dracut-initqueue.service... Nov 1 00:42:07.906803 systemd-networkd[1106]: eth0: DHCPv4 address 172.31.19.28/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 1 00:42:07.918268 systemd[1]: Finished dracut-initqueue.service. Nov 1 00:42:07.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:07.919107 systemd[1]: Reached target remote-fs-pre.target. Nov 1 00:42:07.920981 systemd[1]: Reached target remote-cryptsetup.target. Nov 1 00:42:07.922174 systemd[1]: Reached target remote-fs.target. 
Nov 1 00:42:07.924664 systemd[1]: Starting dracut-pre-mount.service... Nov 1 00:42:07.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:07.935097 systemd[1]: Finished dracut-pre-mount.service. Nov 1 00:42:08.341023 ignition[1072]: Ignition 2.14.0 Nov 1 00:42:08.341037 ignition[1072]: Stage: fetch-offline Nov 1 00:42:08.341151 ignition[1072]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:42:08.341183 ignition[1072]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Nov 1 00:42:08.360356 ignition[1072]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 1 00:42:08.361102 ignition[1072]: Ignition finished successfully Nov 1 00:42:08.363714 systemd[1]: Finished ignition-fetch-offline.service. Nov 1 00:42:08.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:08.365877 systemd[1]: Starting ignition-fetch.service... 
Nov 1 00:42:08.375371 ignition[1130]: Ignition 2.14.0 Nov 1 00:42:08.375384 ignition[1130]: Stage: fetch Nov 1 00:42:08.375581 ignition[1130]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:42:08.375615 ignition[1130]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Nov 1 00:42:08.384121 ignition[1130]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 1 00:42:08.385285 ignition[1130]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 1 00:42:08.393734 ignition[1130]: INFO : PUT result: OK Nov 1 00:42:08.396547 ignition[1130]: DEBUG : parsed url from cmdline: "" Nov 1 00:42:08.396547 ignition[1130]: INFO : no config URL provided Nov 1 00:42:08.396547 ignition[1130]: INFO : reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:42:08.396547 ignition[1130]: INFO : no config at "/usr/lib/ignition/user.ign" Nov 1 00:42:08.396547 ignition[1130]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 1 00:42:08.400375 ignition[1130]: INFO : PUT result: OK Nov 1 00:42:08.400375 ignition[1130]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Nov 1 00:42:08.400375 ignition[1130]: INFO : GET result: OK Nov 1 00:42:08.403659 ignition[1130]: DEBUG : parsing config with SHA512: 6232196f693304fc67287cb5588a8419538f93ccf59c40a3b4ed6860c662d37c8a107cd94f4190fe0c1720396ade577da818311cdc14a0842ef8cd25b0f594ff Nov 1 00:42:08.409207 unknown[1130]: fetched base config from "system" Nov 1 00:42:08.409217 unknown[1130]: fetched base config from "system" Nov 1 00:42:08.409856 ignition[1130]: fetch: fetch complete Nov 1 00:42:08.409222 unknown[1130]: fetched user config from "aws" Nov 1 00:42:08.409862 ignition[1130]: fetch: fetch passed Nov 1 00:42:08.409917 ignition[1130]: Ignition finished successfully Nov 1 00:42:08.414402 systemd[1]: Finished ignition-fetch.service. 
Nov 1 00:42:08.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:08.416261 systemd[1]: Starting ignition-kargs.service... Nov 1 00:42:08.427311 ignition[1136]: Ignition 2.14.0 Nov 1 00:42:08.427325 ignition[1136]: Stage: kargs Nov 1 00:42:08.427531 ignition[1136]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:42:08.427563 ignition[1136]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Nov 1 00:42:08.434832 ignition[1136]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 1 00:42:08.435809 ignition[1136]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 1 00:42:08.440462 ignition[1136]: INFO : PUT result: OK Nov 1 00:42:08.449262 ignition[1136]: kargs: kargs passed Nov 1 00:42:08.449350 ignition[1136]: Ignition finished successfully Nov 1 00:42:08.451827 systemd[1]: Finished ignition-kargs.service. Nov 1 00:42:08.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:08.453953 systemd[1]: Starting ignition-disks.service... 
Nov 1 00:42:08.463236 ignition[1142]: Ignition 2.14.0 Nov 1 00:42:08.463249 ignition[1142]: Stage: disks Nov 1 00:42:08.463459 ignition[1142]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:42:08.463494 ignition[1142]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Nov 1 00:42:08.471333 ignition[1142]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 1 00:42:08.472351 ignition[1142]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 1 00:42:08.473064 ignition[1142]: INFO : PUT result: OK Nov 1 00:42:08.476537 ignition[1142]: disks: disks passed Nov 1 00:42:08.476621 ignition[1142]: Ignition finished successfully Nov 1 00:42:08.478107 systemd[1]: Finished ignition-disks.service. Nov 1 00:42:08.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:08.479291 systemd[1]: Reached target initrd-root-device.target. Nov 1 00:42:08.480337 systemd[1]: Reached target local-fs-pre.target. Nov 1 00:42:08.481276 systemd[1]: Reached target local-fs.target. Nov 1 00:42:08.482313 systemd[1]: Reached target sysinit.target. Nov 1 00:42:08.483225 systemd[1]: Reached target basic.target. Nov 1 00:42:08.485563 systemd[1]: Starting systemd-fsck-root.service... Nov 1 00:42:08.515642 systemd-fsck[1150]: ROOT: clean, 637/553520 files, 56032/553472 blocks Nov 1 00:42:08.518902 systemd[1]: Finished systemd-fsck-root.service. Nov 1 00:42:08.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:08.520543 systemd[1]: Mounting sysroot.mount... 
Nov 1 00:42:08.539100 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Nov 1 00:42:08.537902 systemd[1]: Mounted sysroot.mount. Nov 1 00:42:08.538534 systemd[1]: Reached target initrd-root-fs.target. Nov 1 00:42:08.548527 systemd[1]: Mounting sysroot-usr.mount... Nov 1 00:42:08.549558 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Nov 1 00:42:08.549601 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 00:42:08.549628 systemd[1]: Reached target ignition-diskful.target. Nov 1 00:42:08.552247 systemd[1]: Mounted sysroot-usr.mount. Nov 1 00:42:08.556330 systemd[1]: Starting initrd-setup-root.service... Nov 1 00:42:08.568161 initrd-setup-root[1171]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 00:42:08.580331 initrd-setup-root[1179]: cut: /sysroot/etc/group: No such file or directory Nov 1 00:42:08.584997 initrd-setup-root[1187]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 00:42:08.590547 initrd-setup-root[1195]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 00:42:08.664772 systemd[1]: Mounting sysroot-usr-share-oem.mount... Nov 1 00:42:08.691839 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1205) Nov 1 00:42:08.695885 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:42:08.696130 kernel: BTRFS info (device nvme0n1p6): using free space tree Nov 1 00:42:08.696153 kernel: BTRFS info (device nvme0n1p6): has skinny extents Nov 1 00:42:08.701206 systemd[1]: Finished initrd-setup-root.service. Nov 1 00:42:08.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:42:08.703134 systemd[1]: Starting ignition-mount.service... Nov 1 00:42:08.706988 systemd[1]: Starting sysroot-boot.service... Nov 1 00:42:08.714811 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 1 00:42:08.715822 bash[1231]: umount: /sysroot/usr/share/oem: not mounted. Nov 1 00:42:08.718585 systemd[1]: Mounted sysroot-usr-share-oem.mount. Nov 1 00:42:08.730285 ignition[1232]: INFO : Ignition 2.14.0 Nov 1 00:42:08.731846 ignition[1232]: INFO : Stage: mount Nov 1 00:42:08.733185 ignition[1232]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:42:08.735501 ignition[1232]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Nov 1 00:42:08.747725 ignition[1232]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 1 00:42:08.748839 ignition[1232]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 1 00:42:08.750685 systemd[1]: Finished sysroot-boot.service. Nov 1 00:42:08.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:08.752593 ignition[1232]: INFO : PUT result: OK Nov 1 00:42:08.755645 ignition[1232]: INFO : mount: mount passed Nov 1 00:42:08.756806 ignition[1232]: INFO : Ignition finished successfully Nov 1 00:42:08.757080 systemd[1]: Finished ignition-mount.service. Nov 1 00:42:08.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:08.759185 systemd[1]: Starting ignition-files.service... 
Nov 1 00:42:08.777268 ignition[1242]: INFO : Ignition 2.14.0 Nov 1 00:42:08.777268 ignition[1242]: INFO : Stage: files Nov 1 00:42:08.779511 ignition[1242]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:42:08.779511 ignition[1242]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Nov 1 00:42:08.786749 ignition[1242]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 1 00:42:08.787670 ignition[1242]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 1 00:42:08.788804 ignition[1242]: INFO : PUT result: OK Nov 1 00:42:08.793128 ignition[1242]: DEBUG : files: compiled without relabeling support, skipping Nov 1 00:42:08.799636 ignition[1242]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 00:42:08.799636 ignition[1242]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 00:42:08.803672 ignition[1242]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 00:42:08.805327 ignition[1242]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 00:42:08.807561 unknown[1242]: wrote ssh authorized keys file for user: core Nov 1 00:42:08.808957 ignition[1242]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 1 00:42:08.810939 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 1 00:42:08.810939 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 1 00:42:08.810939 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 1 00:42:08.810939 ignition[1242]: INFO : GET 
https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 1 00:42:08.870164 ignition[1242]: INFO : GET result: OK Nov 1 00:42:09.063450 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 1 00:42:09.063450 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:42:09.066562 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:42:09.066562 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:42:09.066562 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:42:09.066562 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Nov 1 00:42:09.066562 ignition[1242]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Nov 1 00:42:09.080623 ignition[1242]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem32017484" Nov 1 00:42:09.080623 ignition[1242]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem32017484": device or resource busy Nov 1 00:42:09.080623 ignition[1242]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem32017484", trying btrfs: device or resource busy Nov 1 00:42:09.080623 ignition[1242]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem32017484" Nov 1 00:42:09.091614 ignition[1242]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem32017484" Nov 1 00:42:09.091614 
ignition[1242]: INFO : op(3): [started] unmounting "/mnt/oem32017484" Nov 1 00:42:09.091614 ignition[1242]: INFO : op(3): [finished] unmounting "/mnt/oem32017484" Nov 1 00:42:09.091614 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Nov 1 00:42:09.091614 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:42:09.091614 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:42:09.091614 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:42:09.091614 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:42:09.091614 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/home/core/install.sh" Nov 1 00:42:09.091614 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/home/core/install.sh" Nov 1 00:42:09.091614 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:42:09.091614 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:42:09.089241 systemd[1]: mnt-oem32017484.mount: Deactivated successfully. 
Nov 1 00:42:09.113513 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Nov 1 00:42:09.113513 ignition[1242]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Nov 1 00:42:09.113513 ignition[1242]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2312705510" Nov 1 00:42:09.113513 ignition[1242]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2312705510": device or resource busy Nov 1 00:42:09.113513 ignition[1242]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2312705510", trying btrfs: device or resource busy Nov 1 00:42:09.113513 ignition[1242]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2312705510" Nov 1 00:42:09.113513 ignition[1242]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2312705510" Nov 1 00:42:09.113513 ignition[1242]: INFO : op(6): [started] unmounting "/mnt/oem2312705510" Nov 1 00:42:09.113513 ignition[1242]: INFO : op(6): [finished] unmounting "/mnt/oem2312705510" Nov 1 00:42:09.113513 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Nov 1 00:42:09.113513 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:42:09.113513 ignition[1242]: INFO : GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 1 00:42:09.544470 ignition[1242]: INFO : GET result: OK Nov 1 00:42:09.630185 systemd-networkd[1106]: eth0: Gained IPv6LL Nov 1 00:42:10.089069 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:42:10.092257 ignition[1242]: INFO : files: 
createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Nov 1 00:42:10.092257 ignition[1242]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Nov 1 00:42:10.101157 ignition[1242]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3335997571" Nov 1 00:42:10.103834 ignition[1242]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3335997571": device or resource busy Nov 1 00:42:10.103834 ignition[1242]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3335997571", trying btrfs: device or resource busy Nov 1 00:42:10.103834 ignition[1242]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3335997571" Nov 1 00:42:10.118736 ignition[1242]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3335997571" Nov 1 00:42:10.118736 ignition[1242]: INFO : op(9): [started] unmounting "/mnt/oem3335997571" Nov 1 00:42:10.118736 ignition[1242]: INFO : op(9): [finished] unmounting "/mnt/oem3335997571" Nov 1 00:42:10.118736 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Nov 1 00:42:10.118736 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Nov 1 00:42:10.118736 ignition[1242]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Nov 1 00:42:10.111628 systemd[1]: mnt-oem3335997571.mount: Deactivated successfully. 
Nov 1 00:42:10.134370 ignition[1242]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2759746059" Nov 1 00:42:10.134370 ignition[1242]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2759746059": device or resource busy Nov 1 00:42:10.134370 ignition[1242]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2759746059", trying btrfs: device or resource busy Nov 1 00:42:10.134370 ignition[1242]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2759746059" Nov 1 00:42:10.144790 ignition[1242]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2759746059" Nov 1 00:42:10.144790 ignition[1242]: INFO : op(c): [started] unmounting "/mnt/oem2759746059" Nov 1 00:42:10.148714 ignition[1242]: INFO : op(c): [finished] unmounting "/mnt/oem2759746059" Nov 1 00:42:10.148714 ignition[1242]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Nov 1 00:42:10.148714 ignition[1242]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service" Nov 1 00:42:10.148714 ignition[1242]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service" Nov 1 00:42:10.148714 ignition[1242]: INFO : files: op(11): [started] processing unit "amazon-ssm-agent.service" Nov 1 00:42:10.148714 ignition[1242]: INFO : files: op(11): op(12): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Nov 1 00:42:10.148714 ignition[1242]: INFO : files: op(11): op(12): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Nov 1 00:42:10.148714 ignition[1242]: INFO : files: op(11): [finished] processing unit "amazon-ssm-agent.service" Nov 1 00:42:10.148714 ignition[1242]: INFO : files: op(13): [started] processing unit "nvidia.service" Nov 1 00:42:10.148714 ignition[1242]: INFO : files: 
op(13): [finished] processing unit "nvidia.service" Nov 1 00:42:10.148714 ignition[1242]: INFO : files: op(14): [started] processing unit "containerd.service" Nov 1 00:42:10.148714 ignition[1242]: INFO : files: op(14): op(15): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 1 00:42:10.148714 ignition[1242]: INFO : files: op(14): op(15): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 1 00:42:10.148714 ignition[1242]: INFO : files: op(14): [finished] processing unit "containerd.service" Nov 1 00:42:10.148714 ignition[1242]: INFO : files: op(16): [started] processing unit "prepare-helm.service" Nov 1 00:42:10.148714 ignition[1242]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:42:10.148714 ignition[1242]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:42:10.148714 ignition[1242]: INFO : files: op(16): [finished] processing unit "prepare-helm.service" Nov 1 00:42:10.148714 ignition[1242]: INFO : files: op(18): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Nov 1 00:42:10.224072 kernel: kauditd_printk_skb: 26 callbacks suppressed Nov 1 00:42:10.224109 kernel: audit: type=1130 audit(1761957730.178:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.224131 kernel: audit: type=1130 audit(1761957730.201:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:42:10.224150 kernel: audit: type=1131 audit(1761957730.202:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.224170 kernel: audit: type=1130 audit(1761957730.215:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.145651 systemd[1]: mnt-oem2759746059.mount: Deactivated successfully. 
Nov 1 00:42:10.225943 ignition[1242]: INFO : files: op(18): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Nov 1 00:42:10.225943 ignition[1242]: INFO : files: op(19): [started] setting preset to enabled for "amazon-ssm-agent.service" Nov 1 00:42:10.225943 ignition[1242]: INFO : files: op(19): [finished] setting preset to enabled for "amazon-ssm-agent.service" Nov 1 00:42:10.225943 ignition[1242]: INFO : files: op(1a): [started] setting preset to enabled for "nvidia.service" Nov 1 00:42:10.225943 ignition[1242]: INFO : files: op(1a): [finished] setting preset to enabled for "nvidia.service" Nov 1 00:42:10.225943 ignition[1242]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-helm.service" Nov 1 00:42:10.225943 ignition[1242]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 00:42:10.225943 ignition[1242]: INFO : files: createResultFile: createFiles: op(1c): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:42:10.225943 ignition[1242]: INFO : files: createResultFile: createFiles: op(1c): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:42:10.225943 ignition[1242]: INFO : files: files passed Nov 1 00:42:10.225943 ignition[1242]: INFO : Ignition finished successfully Nov 1 00:42:10.261056 kernel: audit: type=1130 audit(1761957730.248:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.261097 kernel: audit: type=1131 audit(1761957730.248:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Nov 1 00:42:10.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.178816 systemd[1]: Finished ignition-files.service. Nov 1 00:42:10.186370 systemd[1]: Starting initrd-setup-root-after-ignition.service... Nov 1 00:42:10.264181 initrd-setup-root-after-ignition[1268]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:42:10.193212 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Nov 1 00:42:10.194382 systemd[1]: Starting ignition-quench.service... Nov 1 00:42:10.199108 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 00:42:10.199234 systemd[1]: Finished ignition-quench.service. Nov 1 00:42:10.205342 systemd[1]: Finished initrd-setup-root-after-ignition.service. Nov 1 00:42:10.217201 systemd[1]: Reached target ignition-complete.target. Nov 1 00:42:10.226017 systemd[1]: Starting initrd-parse-etc.service... Nov 1 00:42:10.248925 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 00:42:10.249048 systemd[1]: Finished initrd-parse-etc.service. Nov 1 00:42:10.250522 systemd[1]: Reached target initrd-fs.target. Nov 1 00:42:10.261938 systemd[1]: Reached target initrd.target. Nov 1 00:42:10.263409 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Nov 1 00:42:10.264485 systemd[1]: Starting dracut-pre-pivot.service... Nov 1 00:42:10.277330 systemd[1]: Finished dracut-pre-pivot.service. Nov 1 00:42:10.284376 kernel: audit: type=1130 audit(1761957730.276:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:42:10.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.278952 systemd[1]: Starting initrd-cleanup.service... Nov 1 00:42:10.292635 systemd[1]: Stopped target nss-lookup.target. Nov 1 00:42:10.293495 systemd[1]: Stopped target remote-cryptsetup.target. Nov 1 00:42:10.294886 systemd[1]: Stopped target timers.target. Nov 1 00:42:10.296223 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 00:42:10.302594 kernel: audit: type=1131 audit(1761957730.295:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.296442 systemd[1]: Stopped dracut-pre-pivot.service. Nov 1 00:42:10.297657 systemd[1]: Stopped target initrd.target. Nov 1 00:42:10.303531 systemd[1]: Stopped target basic.target. Nov 1 00:42:10.304828 systemd[1]: Stopped target ignition-complete.target. Nov 1 00:42:10.305985 systemd[1]: Stopped target ignition-diskful.target. Nov 1 00:42:10.307125 systemd[1]: Stopped target initrd-root-device.target. Nov 1 00:42:10.308569 systemd[1]: Stopped target remote-fs.target. Nov 1 00:42:10.309724 systemd[1]: Stopped target remote-fs-pre.target. Nov 1 00:42:10.310934 systemd[1]: Stopped target sysinit.target. Nov 1 00:42:10.312260 systemd[1]: Stopped target local-fs.target. Nov 1 00:42:10.313454 systemd[1]: Stopped target local-fs-pre.target. Nov 1 00:42:10.314615 systemd[1]: Stopped target swap.target. 
Nov 1 00:42:10.322145 kernel: audit: type=1131 audit(1761957730.315:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.315730 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 00:42:10.316094 systemd[1]: Stopped dracut-pre-mount.service. Nov 1 00:42:10.329652 kernel: audit: type=1131 audit(1761957730.322:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.317258 systemd[1]: Stopped target cryptsetup.target. Nov 1 00:42:10.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.322938 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 00:42:10.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.323158 systemd[1]: Stopped dracut-initqueue.service. Nov 1 00:42:10.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:42:10.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.347237 iscsid[1111]: iscsid shutting down. Nov 1 00:42:10.324519 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 00:42:10.324741 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Nov 1 00:42:10.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.355707 ignition[1281]: INFO : Ignition 2.14.0 Nov 1 00:42:10.355707 ignition[1281]: INFO : Stage: umount Nov 1 00:42:10.355707 ignition[1281]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 00:42:10.355707 ignition[1281]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Nov 1 00:42:10.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.330644 systemd[1]: ignition-files.service: Deactivated successfully. 
Nov 1 00:42:10.369404 ignition[1281]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 1 00:42:10.369404 ignition[1281]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 1 00:42:10.369404 ignition[1281]: INFO : PUT result: OK Nov 1 00:42:10.369404 ignition[1281]: INFO : umount: umount passed Nov 1 00:42:10.369404 ignition[1281]: INFO : Ignition finished successfully Nov 1 00:42:10.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.330883 systemd[1]: Stopped ignition-files.service. 
Nov 1 00:42:10.333312 systemd[1]: Stopping ignition-mount.service... Nov 1 00:42:10.335027 systemd[1]: Stopping iscsid.service... Nov 1 00:42:10.336050 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 00:42:10.336248 systemd[1]: Stopped kmod-static-nodes.service. Nov 1 00:42:10.344583 systemd[1]: Stopping sysroot-boot.service... Nov 1 00:42:10.345565 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 00:42:10.345822 systemd[1]: Stopped systemd-udev-trigger.service. Nov 1 00:42:10.348545 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 00:42:10.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.348733 systemd[1]: Stopped dracut-pre-trigger.service. Nov 1 00:42:10.354411 systemd[1]: iscsid.service: Deactivated successfully. Nov 1 00:42:10.354554 systemd[1]: Stopped iscsid.service. Nov 1 00:42:10.362825 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 00:42:10.362978 systemd[1]: Finished initrd-cleanup.service. Nov 1 00:42:10.370456 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 00:42:10.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.370581 systemd[1]: Stopped ignition-mount.service. Nov 1 00:42:10.373168 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 00:42:10.373247 systemd[1]: Stopped ignition-disks.service. Nov 1 00:42:10.374647 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 00:42:10.374715 systemd[1]: Stopped ignition-kargs.service. Nov 1 00:42:10.376412 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 1 00:42:10.376478 systemd[1]: Stopped ignition-fetch.service. 
Nov 1 00:42:10.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.377454 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 00:42:10.377512 systemd[1]: Stopped ignition-fetch-offline.service. Nov 1 00:42:10.378499 systemd[1]: Stopped target paths.target. Nov 1 00:42:10.379390 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 00:42:10.385857 systemd[1]: Stopped systemd-ask-password-console.path. Nov 1 00:42:10.386764 systemd[1]: Stopped target slices.target. Nov 1 00:42:10.388270 systemd[1]: Stopped target sockets.target. Nov 1 00:42:10.388851 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 00:42:10.388910 systemd[1]: Closed iscsid.socket. Nov 1 00:42:10.389530 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 00:42:10.389580 systemd[1]: Stopped ignition-setup.service. Nov 1 00:42:10.390234 systemd[1]: Stopping iscsiuio.service... Nov 1 00:42:10.396092 systemd[1]: iscsiuio.service: Deactivated successfully. Nov 1 00:42:10.414000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.396196 systemd[1]: Stopped iscsiuio.service. Nov 1 00:42:10.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.397376 systemd[1]: Stopped target network.target. Nov 1 00:42:10.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:42:10.398346 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 00:42:10.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.398406 systemd[1]: Closed iscsiuio.socket. Nov 1 00:42:10.399546 systemd[1]: Stopping systemd-networkd.service... Nov 1 00:42:10.400857 systemd[1]: Stopping systemd-resolved.service... Nov 1 00:42:10.427000 audit: BPF prog-id=6 op=UNLOAD Nov 1 00:42:10.402045 systemd-networkd[1106]: eth0: DHCPv6 lease lost Nov 1 00:42:10.427000 audit: BPF prog-id=9 op=UNLOAD Nov 1 00:42:10.403394 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 00:42:10.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.403526 systemd[1]: Stopped systemd-networkd.service. Nov 1 00:42:10.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.406407 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 00:42:10.406482 systemd[1]: Closed systemd-networkd.socket. Nov 1 00:42:10.408358 systemd[1]: Stopping network-cleanup.service... Nov 1 00:42:10.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:10.415020 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 00:42:10.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Nov 1 00:42:10.415116 systemd[1]: Stopped parse-ip-for-networkd.service.
Nov 1 00:42:10.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:10.416746 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 1 00:42:10.416838 systemd[1]: Stopped systemd-sysctl.service.
Nov 1 00:42:10.417886 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 1 00:42:10.417954 systemd[1]: Stopped systemd-modules-load.service.
Nov 1 00:42:10.422926 systemd[1]: Stopping systemd-udevd.service...
Nov 1 00:42:10.424284 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 1 00:42:10.424381 systemd[1]: Stopped systemd-resolved.service.
Nov 1 00:42:10.431362 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 1 00:42:10.431500 systemd[1]: Stopped network-cleanup.service.
Nov 1 00:42:10.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:10.432474 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 1 00:42:10.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:10.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:10.432603 systemd[1]: Stopped systemd-udevd.service.
Nov 1 00:42:10.433659 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 1 00:42:10.433711 systemd[1]: Closed systemd-udevd-control.socket.
Nov 1 00:42:10.434478 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 1 00:42:10.434523 systemd[1]: Closed systemd-udevd-kernel.socket.
Nov 1 00:42:10.436050 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 1 00:42:10.436117 systemd[1]: Stopped dracut-pre-udev.service.
Nov 1 00:42:10.437132 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 1 00:42:10.437192 systemd[1]: Stopped dracut-cmdline.service.
Nov 1 00:42:10.438288 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 00:42:10.438347 systemd[1]: Stopped dracut-cmdline-ask.service.
Nov 1 00:42:10.440531 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Nov 1 00:42:10.450846 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:42:10.450947 systemd[1]: Stopped systemd-vconsole-setup.service.
Nov 1 00:42:10.452485 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 1 00:42:10.452619 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Nov 1 00:42:10.654859 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 1 00:42:10.654977 systemd[1]: Stopped sysroot-boot.service.
Nov 1 00:42:10.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:10.656369 systemd[1]: Reached target initrd-switch-root.target.
Nov 1 00:42:10.657290 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 1 00:42:10.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:10.657353 systemd[1]: Stopped initrd-setup-root.service.
Nov 1 00:42:10.659548 systemd[1]: Starting initrd-switch-root.service...
Nov 1 00:42:10.665583 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 1 00:42:10.666598 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 1 00:42:10.669923 systemd[1]: Switching root.
Nov 1 00:42:10.669000 audit: BPF prog-id=5 op=UNLOAD
Nov 1 00:42:10.669000 audit: BPF prog-id=4 op=UNLOAD
Nov 1 00:42:10.669000 audit: BPF prog-id=3 op=UNLOAD
Nov 1 00:42:10.672000 audit: BPF prog-id=8 op=UNLOAD
Nov 1 00:42:10.672000 audit: BPF prog-id=7 op=UNLOAD
Nov 1 00:42:10.682207 systemd-journald[185]: Journal stopped
Nov 1 00:42:15.444162 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Nov 1 00:42:15.444246 kernel: SELinux: Class mctp_socket not defined in policy.
Nov 1 00:42:15.444268 kernel: SELinux: Class anon_inode not defined in policy.
Nov 1 00:42:15.444286 kernel: SELinux: the above unknown classes and permissions will be allowed
Nov 1 00:42:15.444303 kernel: SELinux: policy capability network_peer_controls=1
Nov 1 00:42:15.444320 kernel: SELinux: policy capability open_perms=1
Nov 1 00:42:15.444346 kernel: SELinux: policy capability extended_socket_class=1
Nov 1 00:42:15.444373 kernel: SELinux: policy capability always_check_network=0
Nov 1 00:42:15.444390 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 1 00:42:15.444414 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 1 00:42:15.444436 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 1 00:42:15.444454 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 1 00:42:15.444473 systemd[1]: Successfully loaded SELinux policy in 76.039ms.
Nov 1 00:42:15.444498 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.302ms.
Nov 1 00:42:15.444521 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 1 00:42:15.444540 systemd[1]: Detected virtualization amazon.
Nov 1 00:42:15.444558 systemd[1]: Detected architecture x86-64.
Nov 1 00:42:15.444575 systemd[1]: Detected first boot.
Nov 1 00:42:15.444596 systemd[1]: Initializing machine ID from VM UUID.
Nov 1 00:42:15.444614 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Nov 1 00:42:15.444633 systemd[1]: Populated /etc with preset unit settings.
Nov 1 00:42:15.444651 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Nov 1 00:42:15.444671 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 1 00:42:15.444692 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:42:15.444713 systemd[1]: Queued start job for default target multi-user.target.
Nov 1 00:42:15.444733 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device.
Nov 1 00:42:15.444751 systemd[1]: Created slice system-addon\x2dconfig.slice.
Nov 1 00:42:15.444772 systemd[1]: Created slice system-addon\x2drun.slice.
Nov 1 00:42:15.444803 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Nov 1 00:42:15.444822 systemd[1]: Created slice system-getty.slice.
Nov 1 00:42:15.444840 systemd[1]: Created slice system-modprobe.slice.
Nov 1 00:42:15.444859 systemd[1]: Created slice system-serial\x2dgetty.slice.
Nov 1 00:42:15.444877 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Nov 1 00:42:15.444899 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Nov 1 00:42:15.444918 systemd[1]: Created slice user.slice.
Nov 1 00:42:15.444935 systemd[1]: Started systemd-ask-password-console.path.
Nov 1 00:42:15.444954 systemd[1]: Started systemd-ask-password-wall.path.
Nov 1 00:42:15.444972 systemd[1]: Set up automount boot.automount.
Nov 1 00:42:15.444990 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Nov 1 00:42:15.445009 systemd[1]: Reached target integritysetup.target.
Nov 1 00:42:15.445027 systemd[1]: Reached target remote-cryptsetup.target.
Nov 1 00:42:15.445052 systemd[1]: Reached target remote-fs.target.
Nov 1 00:42:15.445070 systemd[1]: Reached target slices.target.
Nov 1 00:42:15.445088 systemd[1]: Reached target swap.target.
Nov 1 00:42:15.445107 systemd[1]: Reached target torcx.target.
Nov 1 00:42:15.445125 systemd[1]: Reached target veritysetup.target.
Nov 1 00:42:15.445143 systemd[1]: Listening on systemd-coredump.socket.
Nov 1 00:42:15.445161 systemd[1]: Listening on systemd-initctl.socket.
Nov 1 00:42:15.445182 kernel: kauditd_printk_skb: 48 callbacks suppressed
Nov 1 00:42:15.445214 kernel: audit: type=1400 audit(1761957735.199:88): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Nov 1 00:42:15.445234 systemd[1]: Listening on systemd-journald-audit.socket.
Nov 1 00:42:15.445252 kernel: audit: type=1335 audit(1761957735.199:89): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Nov 1 00:42:15.445267 systemd[1]: Listening on systemd-journald-dev-log.socket.
Nov 1 00:42:15.445288 systemd[1]: Listening on systemd-journald.socket.
Nov 1 00:42:15.445306 systemd[1]: Listening on systemd-networkd.socket.
Nov 1 00:42:15.448842 systemd[1]: Listening on systemd-udevd-control.socket.
Nov 1 00:42:15.448889 systemd[1]: Listening on systemd-udevd-kernel.socket.
Nov 1 00:42:15.448909 systemd[1]: Listening on systemd-userdbd.socket.
Nov 1 00:42:15.448929 systemd[1]: Mounting dev-hugepages.mount...
Nov 1 00:42:15.448958 systemd[1]: Mounting dev-mqueue.mount...
Nov 1 00:42:15.448976 systemd[1]: Mounting media.mount...
Nov 1 00:42:15.448995 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:42:15.449020 systemd[1]: Mounting sys-kernel-debug.mount...
Nov 1 00:42:15.449039 systemd[1]: Mounting sys-kernel-tracing.mount...
Nov 1 00:42:15.449063 systemd[1]: Mounting tmp.mount...
Nov 1 00:42:15.449082 systemd[1]: Starting flatcar-tmpfiles.service...
Nov 1 00:42:15.449101 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Nov 1 00:42:15.449119 systemd[1]: Starting kmod-static-nodes.service...
Nov 1 00:42:15.449138 systemd[1]: Starting modprobe@configfs.service...
Nov 1 00:42:15.449157 systemd[1]: Starting modprobe@dm_mod.service...
Nov 1 00:42:15.449176 systemd[1]: Starting modprobe@drm.service...
Nov 1 00:42:15.449198 systemd[1]: Starting modprobe@efi_pstore.service...
Nov 1 00:42:15.449217 systemd[1]: Starting modprobe@fuse.service...
Nov 1 00:42:15.449235 systemd[1]: Starting modprobe@loop.service...
Nov 1 00:42:15.449255 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 1 00:42:15.449275 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Nov 1 00:42:15.449295 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Nov 1 00:42:15.449314 systemd[1]: Starting systemd-journald.service...
Nov 1 00:42:15.449333 systemd[1]: Starting systemd-modules-load.service...
Nov 1 00:42:15.449352 systemd[1]: Starting systemd-network-generator.service...
Nov 1 00:42:15.449373 kernel: loop: module loaded
Nov 1 00:42:15.449392 systemd[1]: Starting systemd-remount-fs.service...
Nov 1 00:42:15.449411 systemd[1]: Starting systemd-udev-trigger.service...
Nov 1 00:42:15.449430 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:42:15.449449 systemd[1]: Mounted dev-hugepages.mount.
Nov 1 00:42:15.449468 kernel: fuse: init (API version 7.34)
Nov 1 00:42:15.449485 systemd[1]: Mounted dev-mqueue.mount.
Nov 1 00:42:15.449504 systemd[1]: Mounted media.mount.
Nov 1 00:42:15.449522 systemd[1]: Mounted sys-kernel-debug.mount.
Nov 1 00:42:15.449545 systemd[1]: Mounted sys-kernel-tracing.mount.
Nov 1 00:42:15.449564 systemd[1]: Mounted tmp.mount.
Nov 1 00:42:15.449583 systemd[1]: Finished kmod-static-nodes.service.
Nov 1 00:42:15.449601 kernel: audit: type=1130 audit(1761957735.433:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:15.449619 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 1 00:42:15.449638 kernel: audit: type=1305 audit(1761957735.440:91): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Nov 1 00:42:15.449664 systemd-journald[1428]: Journal started
Nov 1 00:42:15.449743 systemd-journald[1428]: Runtime Journal (/run/log/journal/ec29b4b8cf1fb90d924910734d513851) is 4.8M, max 38.3M, 33.5M free.
Nov 1 00:42:15.199000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Nov 1 00:42:15.199000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Nov 1 00:42:15.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:15.440000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Nov 1 00:42:15.453807 systemd[1]: Finished modprobe@configfs.service.
Nov 1 00:42:15.440000 audit[1428]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffee599eb60 a2=4000 a3=7ffee599ebfc items=0 ppid=1 pid=1428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:15.467438 kernel: audit: type=1300 audit(1761957735.440:91): arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffee599eb60 a2=4000 a3=7ffee599ebfc items=0 ppid=1 pid=1428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:15.467536 systemd[1]: Started systemd-journald.service.
Nov 1 00:42:15.470854 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:42:15.471121 systemd[1]: Finished modprobe@dm_mod.service.
Nov 1 00:42:15.473619 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 00:42:15.474193 systemd[1]: Finished modprobe@drm.service.
Nov 1 00:42:15.440000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Nov 1 00:42:15.476111 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:42:15.476364 systemd[1]: Finished modprobe@efi_pstore.service.
Nov 1 00:42:15.484392 kernel: audit: type=1327 audit(1761957735.440:91): proctitle="/usr/lib/systemd/systemd-journald"
Nov 1 00:42:15.484464 kernel: audit: type=1130 audit(1761957735.463:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:15.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:15.482281 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 1 00:42:15.482535 systemd[1]: Finished modprobe@fuse.service.
Nov 1 00:42:15.484109 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:42:15.484367 systemd[1]: Finished modprobe@loop.service.
Nov 1 00:42:15.493322 systemd[1]: Finished systemd-modules-load.service.
Nov 1 00:42:15.495082 systemd[1]: Finished systemd-network-generator.service.
Nov 1 00:42:15.496841 systemd[1]: Finished systemd-remount-fs.service.
Nov 1 00:42:15.499995 systemd[1]: Reached target network-pre.target.
Nov 1 00:42:15.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:15.503787 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Nov 1 00:42:15.513575 kernel: audit: type=1131 audit(1761957735.463:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:15.511971 systemd[1]: Mounting sys-kernel-config.mount...
Nov 1 00:42:15.513100 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 1 00:42:15.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:15.520074 systemd[1]: Starting systemd-hwdb-update.service...
Nov 1 00:42:15.523856 kernel: audit: type=1130 audit(1761957735.467:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:15.526630 systemd[1]: Starting systemd-journal-flush.service...
Nov 1 00:42:15.532947 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 00:42:15.548409 kernel: audit: type=1130 audit(1761957735.471:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:15.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:15.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:15.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:15.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:15.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:15.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:15.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:15.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:15.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:15.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:15.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:15.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:15.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:15.539823 systemd[1]: Starting systemd-random-seed.service...
Nov 1 00:42:15.545003 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Nov 1 00:42:15.546879 systemd[1]: Starting systemd-sysctl.service...
Nov 1 00:42:15.554191 systemd[1]: Finished flatcar-tmpfiles.service.
Nov 1 00:42:15.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:15.556054 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Nov 1 00:42:15.557956 systemd[1]: Mounted sys-kernel-config.mount.
Nov 1 00:42:15.561136 systemd[1]: Starting systemd-sysusers.service...
Nov 1 00:42:15.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:15.566872 systemd[1]: Finished systemd-random-seed.service.
Nov 1 00:42:15.568182 systemd[1]: Reached target first-boot-complete.target.
Nov 1 00:42:15.579753 systemd-journald[1428]: Time spent on flushing to /var/log/journal/ec29b4b8cf1fb90d924910734d513851 is 49.822ms for 1163 entries.
Nov 1 00:42:15.579753 systemd-journald[1428]: System Journal (/var/log/journal/ec29b4b8cf1fb90d924910734d513851) is 8.0M, max 195.6M, 187.6M free.
Nov 1 00:42:15.637868 systemd-journald[1428]: Received client request to flush runtime journal.
Nov 1 00:42:15.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:15.601155 systemd[1]: Finished systemd-sysctl.service.
Nov 1 00:42:15.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:15.639093 systemd[1]: Finished systemd-journal-flush.service.
Nov 1 00:42:15.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:15.651125 systemd[1]: Finished systemd-udev-trigger.service.
Nov 1 00:42:15.653933 systemd[1]: Starting systemd-udev-settle.service...
Nov 1 00:42:15.667544 udevadm[1483]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Nov 1 00:42:15.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:15.674041 systemd[1]: Finished systemd-sysusers.service.
Nov 1 00:42:15.676922 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Nov 1 00:42:15.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:15.780385 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Nov 1 00:42:16.237806 systemd[1]: Finished systemd-hwdb-update.service.
Nov 1 00:42:16.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:16.239824 systemd[1]: Starting systemd-udevd.service...
Nov 1 00:42:16.263079 systemd-udevd[1488]: Using default interface naming scheme 'v252'.
Nov 1 00:42:16.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:16.330122 systemd[1]: Started systemd-udevd.service.
Nov 1 00:42:16.332441 systemd[1]: Starting systemd-networkd.service...
Nov 1 00:42:16.351563 systemd[1]: Starting systemd-userdbd.service...
Nov 1 00:42:16.404675 systemd[1]: Found device dev-ttyS0.device.
Nov 1 00:42:16.410722 systemd[1]: Started systemd-userdbd.service.
Nov 1 00:42:16.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:16.431058 (udev-worker)[1495]: Network interface NamePolicy= disabled on kernel command line.
Nov 1 00:42:16.484814 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Nov 1 00:42:16.512807 kernel: ACPI: button: Power Button [PWRF]
Nov 1 00:42:16.512904 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Nov 1 00:42:16.537854 kernel: ACPI: button: Sleep Button [SLPF]
Nov 1 00:42:16.556132 systemd-networkd[1497]: lo: Link UP
Nov 1 00:42:16.556144 systemd-networkd[1497]: lo: Gained carrier
Nov 1 00:42:16.556747 systemd-networkd[1497]: Enumeration completed
Nov 1 00:42:16.556932 systemd[1]: Started systemd-networkd.service.
Nov 1 00:42:16.559652 systemd[1]: Starting systemd-networkd-wait-online.service...
Nov 1 00:42:16.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:16.561890 systemd-networkd[1497]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:42:16.572842 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Nov 1 00:42:16.572992 systemd-networkd[1497]: eth0: Link UP
Nov 1 00:42:16.573228 systemd-networkd[1497]: eth0: Gained carrier
Nov 1 00:42:16.543000 audit[1500]: AVC avc: denied { confidentiality } for pid=1500 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Nov 1 00:42:16.543000 audit[1500]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55d52828f060 a1=338ec a2=7ff4df1b1bc5 a3=5 items=110 ppid=1488 pid=1500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:16.543000 audit: CWD cwd="/"
Nov 1 00:42:16.543000 audit: PATH item=0 name=(null) inode=43 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=1 name=(null) inode=14192 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=2 name=(null) inode=14192 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=3 name=(null) inode=14193 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=4 name=(null) inode=14192 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=5 name=(null) inode=14194 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=6 name=(null) inode=14192 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=7 name=(null) inode=14195 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=8 name=(null) inode=14195 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=9 name=(null) inode=14196 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=10 name=(null) inode=14195 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=11 name=(null) inode=14197 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=12 name=(null) inode=14195 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=13 name=(null) inode=14198 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=14 name=(null) inode=14195 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=15 name=(null) inode=14199 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=16 name=(null) inode=14195 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=17 name=(null) inode=14200 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=18 name=(null) inode=14192 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=19 name=(null) inode=14201 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=20 name=(null) inode=14201 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=21 name=(null) inode=14202 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=22 name=(null) inode=14201 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=23 name=(null) inode=14203 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=24 name=(null) inode=14201 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=25 name=(null) inode=14204 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=26 name=(null) inode=14201 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=27 name=(null) inode=14205 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=28 name=(null) inode=14201 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=29 name=(null) inode=14206 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=30 name=(null) inode=14192 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=31 name=(null) inode=14207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=32 name=(null) inode=14207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=33 name=(null) inode=14208 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=34 name=(null) inode=14207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=35 name=(null) inode=14209 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=36 name=(null) inode=14207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=37 name=(null) inode=14210 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=38 name=(null) inode=14207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=39 name=(null) inode=14211 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=40 name=(null) inode=14207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=41 name=(null) inode=14212 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=42 name=(null) inode=14192 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=43 name=(null) inode=14213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=44 name=(null) inode=14213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=45 name=(null) inode=14214 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=46 name=(null) inode=14213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=47 name=(null) inode=14215 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=48 name=(null) inode=14213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=49 name=(null) inode=14216 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=50 name=(null) inode=14213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=51 name=(null) inode=14217 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=52 name=(null) inode=14213 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=53 name=(null) inode=14218 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=54 name=(null) inode=43 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=55 name=(null) inode=14219 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=56 name=(null) inode=14219 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=57 name=(null) inode=14220 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=58 name=(null) inode=14219 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=59 name=(null) inode=14221 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Nov 1 00:42:16.543000 audit: PATH item=60 name=(null) inode=14219 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=61 name=(null) inode=14222 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=62 name=(null) inode=14222 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=63 name=(null) inode=14223 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=64 name=(null) inode=14222 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=65 name=(null) inode=14224 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=66 name=(null) inode=14222 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=67 name=(null) inode=14225 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=68 name=(null) inode=14222 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=69 name=(null) inode=14226 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=70 name=(null) inode=14222 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=71 name=(null) inode=14227 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=72 name=(null) inode=14219 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=73 name=(null) inode=14228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=74 name=(null) inode=14228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=75 name=(null) inode=14229 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=76 name=(null) inode=14228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=77 name=(null) inode=14230 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=78 name=(null) inode=14228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Nov 1 00:42:16.543000 audit: PATH item=79 name=(null) inode=14231 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=80 name=(null) inode=14228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=81 name=(null) inode=14232 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=82 name=(null) inode=14228 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=83 name=(null) inode=14233 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=84 name=(null) inode=14219 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=85 name=(null) inode=14234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=86 name=(null) inode=14234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=87 name=(null) inode=14235 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=88 
name=(null) inode=14234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=89 name=(null) inode=14236 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=90 name=(null) inode=14234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=91 name=(null) inode=14237 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=92 name=(null) inode=14234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=93 name=(null) inode=14238 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=94 name=(null) inode=14234 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=95 name=(null) inode=14239 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=96 name=(null) inode=14219 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=97 name=(null) inode=14240 dev=00:0b mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=98 name=(null) inode=14240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=99 name=(null) inode=14241 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=100 name=(null) inode=14240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=101 name=(null) inode=14242 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=102 name=(null) inode=14240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=103 name=(null) inode=14243 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=104 name=(null) inode=14240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=105 name=(null) inode=14244 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=106 name=(null) inode=14240 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=107 name=(null) inode=14245 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PATH item=109 name=(null) inode=14246 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:42:16.543000 audit: PROCTITLE proctitle="(udev-worker)" Nov 1 00:42:16.585994 systemd-networkd[1497]: eth0: DHCPv4 address 172.31.19.28/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 1 00:42:16.614861 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Nov 1 00:42:16.620834 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Nov 1 00:42:16.628846 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 00:42:16.746143 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Nov 1 00:42:16.747525 systemd[1]: Finished systemd-udev-settle.service. Nov 1 00:42:16.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:16.749704 systemd[1]: Starting lvm2-activation-early.service... Nov 1 00:42:16.785951 lvm[1604]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:42:16.814158 systemd[1]: Finished lvm2-activation-early.service. 
Nov 1 00:42:16.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:16.814854 systemd[1]: Reached target cryptsetup.target. Nov 1 00:42:16.816995 systemd[1]: Starting lvm2-activation.service... Nov 1 00:42:16.822950 lvm[1606]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:42:16.847179 systemd[1]: Finished lvm2-activation.service. Nov 1 00:42:16.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:16.847836 systemd[1]: Reached target local-fs-pre.target. Nov 1 00:42:16.848375 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 00:42:16.848403 systemd[1]: Reached target local-fs.target. Nov 1 00:42:16.848877 systemd[1]: Reached target machines.target. Nov 1 00:42:16.850597 systemd[1]: Starting ldconfig.service... Nov 1 00:42:16.852428 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:42:16.852509 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:42:16.854366 systemd[1]: Starting systemd-boot-update.service... Nov 1 00:42:16.856897 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Nov 1 00:42:16.860125 systemd[1]: Starting systemd-machine-id-commit.service... Nov 1 00:42:16.863499 systemd[1]: Starting systemd-sysext.service... 
Nov 1 00:42:16.880081 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1609 (bootctl) Nov 1 00:42:16.882072 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Nov 1 00:42:16.891827 systemd[1]: Unmounting usr-share-oem.mount... Nov 1 00:42:16.896979 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Nov 1 00:42:16.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:16.900759 systemd[1]: usr-share-oem.mount: Deactivated successfully. Nov 1 00:42:16.902428 systemd[1]: Unmounted usr-share-oem.mount. Nov 1 00:42:16.924799 kernel: loop0: detected capacity change from 0 to 224512 Nov 1 00:42:16.997637 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 00:42:16.998875 systemd[1]: Finished systemd-machine-id-commit.service. Nov 1 00:42:16.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:17.031804 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 00:42:17.045808 kernel: loop1: detected capacity change from 0 to 224512 Nov 1 00:42:17.062551 (sd-sysext)[1625]: Using extensions 'kubernetes'. Nov 1 00:42:17.063842 (sd-sysext)[1625]: Merged extensions into '/usr'. Nov 1 00:42:17.083463 systemd-fsck[1621]: fsck.fat 4.2 (2021-01-31) Nov 1 00:42:17.083463 systemd-fsck[1621]: /dev/nvme0n1p1: 790 files, 120773/258078 clusters Nov 1 00:42:17.083698 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:42:17.085931 systemd[1]: Mounting usr-share-oem.mount... 
Nov 1 00:42:17.086760 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:42:17.088347 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:42:17.090230 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:42:17.092004 systemd[1]: Starting modprobe@loop.service... Nov 1 00:42:17.092835 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:42:17.093246 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:42:17.093398 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:42:17.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:17.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:17.097155 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:42:17.097326 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:42:17.100381 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:42:17.100559 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:42:17.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:42:17.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:17.111530 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Nov 1 00:42:17.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:17.112546 systemd[1]: Mounted usr-share-oem.mount. Nov 1 00:42:17.113398 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:42:17.113567 systemd[1]: Finished modprobe@loop.service. Nov 1 00:42:17.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:17.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:17.119201 systemd[1]: Mounting boot.mount... Nov 1 00:42:17.119690 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:42:17.119913 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:42:17.130371 systemd[1]: Finished systemd-sysext.service. Nov 1 00:42:17.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:42:17.132456 systemd[1]: Starting ensure-sysext.service... Nov 1 00:42:17.134116 systemd[1]: Starting systemd-tmpfiles-setup.service... Nov 1 00:42:17.143430 systemd[1]: Reloading. Nov 1 00:42:17.159625 systemd-tmpfiles[1654]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Nov 1 00:42:17.162541 systemd-tmpfiles[1654]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 00:42:17.165268 systemd-tmpfiles[1654]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 1 00:42:17.209902 /usr/lib/systemd/system-generators/torcx-generator[1677]: time="2025-11-01T00:42:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:42:17.210017 /usr/lib/systemd/system-generators/torcx-generator[1677]: time="2025-11-01T00:42:17Z" level=info msg="torcx already run" Nov 1 00:42:17.349634 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:42:17.349989 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:42:17.375292 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:42:17.462742 systemd[1]: Mounted boot.mount. Nov 1 00:42:17.484737 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:42:17.486700 systemd[1]: Starting modprobe@dm_mod.service... 
Nov 1 00:42:17.489253 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:42:17.491945 systemd[1]: Starting modprobe@loop.service... Nov 1 00:42:17.493007 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:42:17.493219 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:42:17.496868 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:42:17.497065 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:42:17.497197 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:42:17.500632 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:42:17.500926 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:42:17.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:17.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:17.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:42:17.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:17.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:17.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:17.511040 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:42:17.511295 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:42:17.512701 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:42:17.512989 systemd[1]: Finished modprobe@loop.service. Nov 1 00:42:17.514240 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:42:17.514379 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:42:17.518866 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:42:17.528058 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:42:17.530916 systemd[1]: Starting modprobe@drm.service... Nov 1 00:42:17.536631 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:42:17.542571 systemd[1]: Starting modprobe@loop.service... Nov 1 00:42:17.545070 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Nov 1 00:42:17.545311 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:42:17.547235 systemd[1]: Finished systemd-boot-update.service. Nov 1 00:42:17.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:17.554177 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:42:17.554408 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:42:17.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:17.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:17.556514 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:42:17.556725 systemd[1]: Finished modprobe@drm.service. Nov 1 00:42:17.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:17.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:17.560242 systemd[1]: Finished ensure-sysext.service. 
Nov 1 00:42:17.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:17.561576 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:42:17.561837 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:42:17.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:17.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:17.564162 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:42:17.578107 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:42:17.578325 systemd[1]: Finished modprobe@loop.service. Nov 1 00:42:17.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:17.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:17.578935 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:42:17.655356 systemd[1]: Finished systemd-tmpfiles-setup.service. Nov 1 00:42:17.657360 systemd[1]: Starting audit-rules.service... 
Nov 1 00:42:17.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:17.659628 systemd[1]: Starting clean-ca-certificates.service...
Nov 1 00:42:17.662112 systemd[1]: Starting systemd-journal-catalog-update.service...
Nov 1 00:42:17.668065 systemd[1]: Starting systemd-resolved.service...
Nov 1 00:42:17.670537 systemd[1]: Starting systemd-timesyncd.service...
Nov 1 00:42:17.675108 systemd[1]: Starting systemd-update-utmp.service...
Nov 1 00:42:17.677282 systemd[1]: Finished clean-ca-certificates.service.
Nov 1 00:42:17.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:17.683255 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 1 00:42:17.696000 audit[1763]: SYSTEM_BOOT pid=1763 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:17.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:17.699954 systemd[1]: Finished systemd-update-utmp.service.
Nov 1 00:42:17.762309 systemd[1]: Finished systemd-journal-catalog-update.service.
Nov 1 00:42:17.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Nov 1 00:42:17.789766 augenrules[1779]: No rules
Nov 1 00:42:17.787000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Nov 1 00:42:17.787000 audit[1779]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd3e4fe240 a2=420 a3=0 items=0 ppid=1756 pid=1779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Nov 1 00:42:17.787000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Nov 1 00:42:17.790869 systemd[1]: Finished audit-rules.service.
Nov 1 00:42:17.837646 systemd[1]: Started systemd-timesyncd.service.
Nov 1 00:42:17.838411 systemd[1]: Reached target time-set.target.
Nov 1 00:42:17.846962 systemd-resolved[1760]: Positive Trust Anchors:
Nov 1 00:42:17.846980 systemd-resolved[1760]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 00:42:17.847013 systemd-resolved[1760]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Nov 1 00:42:17.882253 ldconfig[1608]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 1 00:42:17.891147 systemd[1]: Finished ldconfig.service.
Nov 1 00:42:17.893111 systemd[1]: Starting systemd-update-done.service...
Nov 1 00:42:17.896443 systemd-resolved[1760]: Defaulting to hostname 'linux'.
Nov 1 00:42:17.898423 systemd[1]: Started systemd-resolved.service.
Nov 1 00:42:17.898893 systemd[1]: Reached target network.target.
Nov 1 00:42:17.899224 systemd[1]: Reached target nss-lookup.target.
Nov 1 00:42:17.900751 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:42:17.900775 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:42:17.902465 systemd[1]: Finished systemd-update-done.service.
Nov 1 00:42:17.902944 systemd[1]: Reached target sysinit.target.
Nov 1 00:42:17.903345 systemd[1]: Started motdgen.path.
Nov 1 00:42:17.903683 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Nov 1 00:42:17.904298 systemd[1]: Started logrotate.timer.
Nov 1 00:42:17.904710 systemd[1]: Started mdadm.timer.
Nov 1 00:42:17.905066 systemd[1]: Started systemd-tmpfiles-clean.timer.
Nov 1 00:42:17.905383 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 1 00:42:17.905410 systemd[1]: Reached target paths.target.
Nov 1 00:42:17.905718 systemd[1]: Reached target timers.target.
Nov 1 00:42:17.906516 systemd[1]: Listening on dbus.socket.
Nov 1 00:42:17.908034 systemd[1]: Starting docker.socket...
Nov 1 00:42:17.910303 systemd[1]: Listening on sshd.socket.
Nov 1 00:42:17.911468 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 1 00:42:17.912513 systemd[1]: Listening on docker.socket.
Nov 1 00:42:17.913219 systemd[1]: Reached target sockets.target.
Nov 1 00:42:17.913828 systemd[1]: Reached target basic.target.
Nov 1 00:42:17.914614 systemd[1]: System is tainted: cgroupsv1
Nov 1 00:42:17.914688 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Nov 1 00:42:17.914723 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Nov 1 00:42:17.916270 systemd[1]: Starting containerd.service...
Nov 1 00:42:17.918389 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Nov 1 00:42:17.920770 systemd[1]: Starting dbus.service...
Nov 1 00:42:17.924609 systemd[1]: Starting enable-oem-cloudinit.service...
Nov 1 00:42:17.927106 systemd[1]: Starting extend-filesystems.service...
Nov 1 00:42:17.928261 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Nov 1 00:42:17.930150 systemd[1]: Starting motdgen.service...
Nov 1 00:42:17.937607 systemd[1]: Starting prepare-helm.service...
Nov 1 00:42:17.940193 systemd[1]: Starting ssh-key-proc-cmdline.service...
Nov 1 00:42:17.945434 systemd[1]: Starting sshd-keygen.service...
Nov 1 00:42:17.950732 systemd[1]: Starting systemd-logind.service...
Nov 1 00:42:17.951971 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 1 00:42:17.952098 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 1 00:42:17.956338 systemd[1]: Starting update-engine.service...
Nov 1 00:42:17.958707 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Nov 1 00:42:17.966101 jq[1795]: false
Nov 1 00:42:17.971852 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 1 00:42:17.972208 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Nov 1 00:42:17.990146 jq[1805]: true
Nov 1 00:42:17.992050 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 1 00:42:17.992388 systemd[1]: Finished ssh-key-proc-cmdline.service.
Nov 1 00:42:18.021816 tar[1811]: linux-amd64/LICENSE
Nov 1 00:42:18.021816 tar[1811]: linux-amd64/helm
Nov 1 00:42:18.045222 jq[1818]: true
Nov 1 00:42:18.066415 extend-filesystems[1796]: Found loop1
Nov 1 00:42:18.067483 extend-filesystems[1796]: Found nvme0n1
Nov 1 00:42:18.067483 extend-filesystems[1796]: Found nvme0n1p1
Nov 1 00:42:18.067483 extend-filesystems[1796]: Found nvme0n1p2
Nov 1 00:42:18.067483 extend-filesystems[1796]: Found nvme0n1p3
Nov 1 00:42:18.067483 extend-filesystems[1796]: Found usr
Nov 1 00:42:18.067483 extend-filesystems[1796]: Found nvme0n1p4
Nov 1 00:42:18.067483 extend-filesystems[1796]: Found nvme0n1p6
Nov 1 00:42:18.067483 extend-filesystems[1796]: Found nvme0n1p7
Nov 1 00:42:18.067483 extend-filesystems[1796]: Found nvme0n1p9
Nov 1 00:42:18.067483 extend-filesystems[1796]: Checking size of /dev/nvme0n1p9
Nov 1 00:42:18.099028 dbus-daemon[1794]: [system] SELinux support is enabled
Nov 1 00:42:18.100609 systemd[1]: Started dbus.service.
Nov 1 00:42:18.930840 systemd-timesyncd[1761]: Contacted time server 162.243.25.188:123 (0.flatcar.pool.ntp.org).
Nov 1 00:42:18.930906 systemd-timesyncd[1761]: Initial clock synchronization to Sat 2025-11-01 00:42:18.930701 UTC.
Nov 1 00:42:18.930958 systemd-resolved[1760]: Clock change detected. Flushing caches.
Nov 1 00:42:18.932381 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 1 00:42:18.932442 systemd[1]: Reached target system-config.target.
Nov 1 00:42:18.933040 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 1 00:42:18.933072 systemd[1]: Reached target user-config.target.
Nov 1 00:42:18.937954 dbus-daemon[1794]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1497 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Nov 1 00:42:18.943416 dbus-daemon[1794]: [system] Successfully activated service 'org.freedesktop.systemd1'
Nov 1 00:42:18.945103 systemd[1]: motdgen.service: Deactivated successfully.
Nov 1 00:42:18.945492 systemd[1]: Finished motdgen.service.
Nov 1 00:42:18.950810 systemd[1]: Starting systemd-hostnamed.service...
Nov 1 00:42:18.962243 extend-filesystems[1796]: Resized partition /dev/nvme0n1p9
Nov 1 00:42:18.978810 extend-filesystems[1854]: resize2fs 1.46.5 (30-Dec-2021)
Nov 1 00:42:18.993363 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Nov 1 00:42:19.002887 update_engine[1804]: I1101 00:42:19.002063 1804 main.cc:92] Flatcar Update Engine starting
Nov 1 00:42:19.009557 systemd[1]: Started update-engine.service.
Nov 1 00:42:19.010644 update_engine[1804]: I1101 00:42:19.009614 1804 update_check_scheduler.cc:74] Next update check in 4m14s
Nov 1 00:42:19.012725 systemd[1]: Started locksmithd.service.
Nov 1 00:42:19.088223 bash[1855]: Updated "/home/core/.ssh/authorized_keys"
Nov 1 00:42:19.089352 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Nov 1 00:42:19.092082 env[1822]: time="2025-11-01T00:42:19.092024742Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Nov 1 00:42:19.111357 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Nov 1 00:42:19.134112 extend-filesystems[1854]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Nov 1 00:42:19.134112 extend-filesystems[1854]: old_desc_blocks = 1, new_desc_blocks = 2
Nov 1 00:42:19.134112 extend-filesystems[1854]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Nov 1 00:42:19.137953 extend-filesystems[1796]: Resized filesystem in /dev/nvme0n1p9
Nov 1 00:42:19.135083 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 1 00:42:19.135423 systemd[1]: Finished extend-filesystems.service.
Nov 1 00:42:19.160500 systemd-networkd[1497]: eth0: Gained IPv6LL
Nov 1 00:42:19.164541 systemd[1]: Finished systemd-networkd-wait-online.service.
Nov 1 00:42:19.165465 systemd[1]: Reached target network-online.target.
Nov 1 00:42:19.167992 systemd[1]: Started amazon-ssm-agent.service.
Nov 1 00:42:19.172102 systemd[1]: Starting kubelet.service...
Nov 1 00:42:19.174751 systemd[1]: Started nvidia.service.
Nov 1 00:42:19.233797 systemd-logind[1803]: Watching system buttons on /dev/input/event1 (Power Button)
Nov 1 00:42:19.234432 systemd-logind[1803]: Watching system buttons on /dev/input/event2 (Sleep Button)
Nov 1 00:42:19.234572 systemd-logind[1803]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 1 00:42:19.234867 systemd-logind[1803]: New seat seat0.
Nov 1 00:42:19.241519 systemd[1]: Started systemd-logind.service.
Nov 1 00:42:19.401485 env[1822]: time="2025-11-01T00:42:19.401399220Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 1 00:42:19.415037 amazon-ssm-agent[1885]: 2025/11/01 00:42:19 Failed to load instance info from vault. RegistrationKey does not exist.
Nov 1 00:42:19.417647 env[1822]: time="2025-11-01T00:42:19.417572512Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:42:19.421915 env[1822]: time="2025-11-01T00:42:19.421852643Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:42:19.425474 amazon-ssm-agent[1885]: Initializing new seelog logger
Nov 1 00:42:19.436330 env[1822]: time="2025-11-01T00:42:19.436267535Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:42:19.438686 amazon-ssm-agent[1885]: New Seelog Logger Creation Complete
Nov 1 00:42:19.440611 amazon-ssm-agent[1885]: 2025/11/01 00:42:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Nov 1 00:42:19.440734 amazon-ssm-agent[1885]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Nov 1 00:42:19.441107 amazon-ssm-agent[1885]: 2025/11/01 00:42:19 processing appconfig overrides
Nov 1 00:42:19.444373 env[1822]: time="2025-11-01T00:42:19.444291043Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:42:19.444572 env[1822]: time="2025-11-01T00:42:19.444551073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 1 00:42:19.444693 env[1822]: time="2025-11-01T00:42:19.444673841Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Nov 1 00:42:19.444792 env[1822]: time="2025-11-01T00:42:19.444775774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 1 00:42:19.445053 env[1822]: time="2025-11-01T00:42:19.445032358Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:42:19.445577 env[1822]: time="2025-11-01T00:42:19.445552552Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:42:19.446094 env[1822]: time="2025-11-01T00:42:19.446053147Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:42:19.446199 env[1822]: time="2025-11-01T00:42:19.446182985Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 1 00:42:19.446380 env[1822]: time="2025-11-01T00:42:19.446362229Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Nov 1 00:42:19.446491 env[1822]: time="2025-11-01T00:42:19.446474489Z" level=info msg="metadata content store policy set" policy=shared
Nov 1 00:42:19.454009 env[1822]: time="2025-11-01T00:42:19.453942781Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 1 00:42:19.454217 env[1822]: time="2025-11-01T00:42:19.454193852Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 1 00:42:19.454408 env[1822]: time="2025-11-01T00:42:19.454388739Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 1 00:42:19.454636 env[1822]: time="2025-11-01T00:42:19.454562605Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 1 00:42:19.454747 env[1822]: time="2025-11-01T00:42:19.454731844Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 1 00:42:19.454852 env[1822]: time="2025-11-01T00:42:19.454836819Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 1 00:42:19.454955 env[1822]: time="2025-11-01T00:42:19.454939267Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 1 00:42:19.455047 env[1822]: time="2025-11-01T00:42:19.455034150Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 1 00:42:19.455141 env[1822]: time="2025-11-01T00:42:19.455126365Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Nov 1 00:42:19.455232 env[1822]: time="2025-11-01T00:42:19.455218448Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 1 00:42:19.455305 env[1822]: time="2025-11-01T00:42:19.455292091Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 1 00:42:19.455486 env[1822]: time="2025-11-01T00:42:19.455467788Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 1 00:42:19.455741 env[1822]: time="2025-11-01T00:42:19.455723214Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 1 00:42:19.455931 env[1822]: time="2025-11-01T00:42:19.455915075Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 1 00:42:19.456619 env[1822]: time="2025-11-01T00:42:19.456595490Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 1 00:42:19.456746 env[1822]: time="2025-11-01T00:42:19.456715690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 1 00:42:19.456817 env[1822]: time="2025-11-01T00:42:19.456805008Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 1 00:42:19.456924 env[1822]: time="2025-11-01T00:42:19.456910306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 1 00:42:19.457010 env[1822]: time="2025-11-01T00:42:19.456996301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 1 00:42:19.457081 env[1822]: time="2025-11-01T00:42:19.457068545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 1 00:42:19.457146 env[1822]: time="2025-11-01T00:42:19.457133677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 1 00:42:19.457230 env[1822]: time="2025-11-01T00:42:19.457215963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Nov 1 00:42:19.457305 env[1822]: time="2025-11-01T00:42:19.457291080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 1 00:42:19.457391 env[1822]: time="2025-11-01T00:42:19.457378377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 1 00:42:19.457471 env[1822]: time="2025-11-01T00:42:19.457457217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 1 00:42:19.457568 env[1822]: time="2025-11-01T00:42:19.457553958Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 1 00:42:19.457789 env[1822]: time="2025-11-01T00:42:19.457772131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 1 00:42:19.462545 env[1822]: time="2025-11-01T00:42:19.462487249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 1 00:42:19.462762 env[1822]: time="2025-11-01T00:42:19.462743752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 1 00:42:19.462862 env[1822]: time="2025-11-01T00:42:19.462847161Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 1 00:42:19.462965 env[1822]: time="2025-11-01T00:42:19.462945308Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Nov 1 00:42:19.465367 env[1822]: time="2025-11-01T00:42:19.465308244Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 1 00:42:19.465835 env[1822]: time="2025-11-01T00:42:19.465799071Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Nov 1 00:42:19.466015 env[1822]: time="2025-11-01T00:42:19.465975300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 1 00:42:19.473450 env[1822]: time="2025-11-01T00:42:19.466516759Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Nov 1 00:42:19.482774 env[1822]: time="2025-11-01T00:42:19.482639444Z" level=info msg="Connect containerd service"
Nov 1 00:42:19.482913 env[1822]: time="2025-11-01T00:42:19.482836152Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Nov 1 00:42:19.484009 env[1822]: time="2025-11-01T00:42:19.483969810Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 1 00:42:19.484220 env[1822]: time="2025-11-01T00:42:19.484180372Z" level=info msg="Start subscribing containerd event"
Nov 1 00:42:19.484274 env[1822]: time="2025-11-01T00:42:19.484259655Z" level=info msg="Start recovering state"
Nov 1 00:42:19.484395 env[1822]: time="2025-11-01T00:42:19.484379645Z" level=info msg="Start event monitor"
Nov 1 00:42:19.484444 env[1822]: time="2025-11-01T00:42:19.484402131Z" level=info msg="Start snapshots syncer"
Nov 1 00:42:19.484444 env[1822]: time="2025-11-01T00:42:19.484418290Z" level=info msg="Start cni network conf syncer for default"
Nov 1 00:42:19.484526 env[1822]: time="2025-11-01T00:42:19.484447007Z" level=info msg="Start streaming server"
Nov 1 00:42:19.484949 env[1822]: time="2025-11-01T00:42:19.484923857Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 1 00:42:19.485064 env[1822]: time="2025-11-01T00:42:19.485005134Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 1 00:42:19.485268 systemd[1]: Started containerd.service.
Nov 1 00:42:19.489609 systemd[1]: Created slice system-sshd.slice.
Nov 1 00:42:19.492959 env[1822]: time="2025-11-01T00:42:19.490558746Z" level=info msg="containerd successfully booted in 0.466405s"
Nov 1 00:42:19.508252 dbus-daemon[1794]: [system] Successfully activated service 'org.freedesktop.hostname1'
Nov 1 00:42:19.508599 systemd[1]: Started systemd-hostnamed.service.
Nov 1 00:42:19.511060 dbus-daemon[1794]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1850 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Nov 1 00:42:19.515160 systemd[1]: Starting polkit.service...
Nov 1 00:42:19.546661 polkitd[1922]: Started polkitd version 121
Nov 1 00:42:19.579884 polkitd[1922]: Loading rules from directory /etc/polkit-1/rules.d
Nov 1 00:42:19.579976 polkitd[1922]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 1 00:42:19.585684 polkitd[1922]: Finished loading, compiling and executing 2 rules
Nov 1 00:42:19.586270 dbus-daemon[1794]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Nov 1 00:42:19.586493 systemd[1]: Started polkit.service.
Nov 1 00:42:19.589160 polkitd[1922]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 1 00:42:19.612027 systemd-hostnamed[1850]: Hostname set to (transient)
Nov 1 00:42:19.612155 systemd-resolved[1760]: System hostname changed to 'ip-172-31-19-28'.
Nov 1 00:42:19.614818 systemd[1]: nvidia.service: Deactivated successfully.
Nov 1 00:42:19.805041 coreos-metadata[1792]: Nov 01 00:42:19.800 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Nov 1 00:42:19.811438 coreos-metadata[1792]: Nov 01 00:42:19.811 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1
Nov 1 00:42:19.813481 coreos-metadata[1792]: Nov 01 00:42:19.813 INFO Fetch successful
Nov 1 00:42:19.813639 coreos-metadata[1792]: Nov 01 00:42:19.813 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1
Nov 1 00:42:19.820230 coreos-metadata[1792]: Nov 01 00:42:19.820 INFO Fetch successful
Nov 1 00:42:19.821911 unknown[1792]: wrote ssh authorized keys file for user: core
Nov 1 00:42:19.838368 update-ssh-keys[1977]: Updated "/home/core/.ssh/authorized_keys"
Nov 1 00:42:19.837729 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Nov 1 00:42:20.148668 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO Create new startup processor
Nov 1 00:42:20.150847 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [LongRunningPluginsManager] registered plugins: {}
Nov 1 00:42:20.155126 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO Initializing bookkeeping folders
Nov 1 00:42:20.155300 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO removing the completed state files
Nov 1 00:42:20.155397 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO Initializing bookkeeping folders for long running plugins
Nov 1 00:42:20.155486 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO Initializing replies folder for MDS reply requests that couldn't reach the service
Nov 1 00:42:20.156616 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO Initializing healthcheck folders for long running plugins
Nov 1 00:42:20.156738 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO Initializing locations for inventory plugin
Nov 1 00:42:20.156840 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO Initializing default location for custom inventory
Nov 1 00:42:20.156937 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO Initializing default location for file inventory
Nov 1 00:42:20.157681 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO Initializing default location for role inventory
Nov 1 00:42:20.157801 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO Init the cloudwatchlogs publisher
Nov 1 00:42:20.157890 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [instanceID=i-0b303a8204aa449e1] Successfully loaded platform independent plugin aws:softwareInventory
Nov 1 00:42:20.157974 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [instanceID=i-0b303a8204aa449e1] Successfully loaded platform independent plugin aws:runPowerShellScript
Nov 1 00:42:20.158054 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [instanceID=i-0b303a8204aa449e1] Successfully loaded platform independent plugin aws:updateSsmAgent
Nov 1 00:42:20.158130 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [instanceID=i-0b303a8204aa449e1] Successfully loaded platform independent plugin aws:runDocument
Nov 1 00:42:20.158202 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [instanceID=i-0b303a8204aa449e1] Successfully loaded platform independent plugin aws:configureDocker
Nov 1 00:42:20.158301 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [instanceID=i-0b303a8204aa449e1] Successfully loaded platform independent plugin aws:runDockerAction
Nov 1 00:42:20.159777 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [instanceID=i-0b303a8204aa449e1] Successfully loaded platform independent plugin aws:refreshAssociation
Nov 1 00:42:20.159893 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [instanceID=i-0b303a8204aa449e1] Successfully loaded platform independent plugin aws:configurePackage
Nov 1 00:42:20.159970 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [instanceID=i-0b303a8204aa449e1] Successfully loaded platform independent plugin aws:downloadContent
Nov 1 00:42:20.160051 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [instanceID=i-0b303a8204aa449e1] Successfully loaded platform dependent plugin aws:runShellScript
Nov 1 00:42:20.160141 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0
Nov 1 00:42:20.160218 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO OS: linux, Arch: amd64
Nov 1 00:42:20.162253 amazon-ssm-agent[1885]: datastore file /var/lib/amazon/ssm/i-0b303a8204aa449e1/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute
Nov 1 00:42:20.250688 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [MessageGatewayService] Starting session document processing engine...
Nov 1 00:42:20.311466 tar[1811]: linux-amd64/README.md
Nov 1 00:42:20.321565 systemd[1]: Finished prepare-helm.service.
Nov 1 00:42:20.345349 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [MessageGatewayService] [EngineProcessor] Starting
Nov 1 00:42:20.439707 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module.
Nov 1 00:42:20.534577 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0b303a8204aa449e1, requestId: dd585484-7513-467c-8228-03e6cf66e1fa
Nov 1 00:42:20.560240 locksmithd[1862]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 1 00:42:20.629602 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [MessagingDeliveryService] Starting document processing engine...
Nov 1 00:42:20.724514 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [MessagingDeliveryService] [EngineProcessor] Starting Nov 1 00:42:20.819621 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Nov 1 00:42:20.915237 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [MessagingDeliveryService] Starting message polling Nov 1 00:42:21.010819 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [MessagingDeliveryService] Starting send replies to MDS Nov 1 00:42:21.106426 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [instanceID=i-0b303a8204aa449e1] Starting association polling Nov 1 00:42:21.202332 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Nov 1 00:42:21.298393 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [MessagingDeliveryService] [Association] Launching response handler Nov 1 00:42:21.394827 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Nov 1 00:42:21.491596 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Nov 1 00:42:21.588329 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Nov 1 00:42:21.685131 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [MessageGatewayService] listening reply. Nov 1 00:42:21.782206 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [HealthCheck] HealthCheck reporting agent health. Nov 1 00:42:21.821844 systemd[1]: Started kubelet.service. Nov 1 00:42:21.879548 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [OfflineService] Starting document processing engine... 
Nov 1 00:42:21.976957 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [OfflineService] [EngineProcessor] Starting Nov 1 00:42:22.076008 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [OfflineService] [EngineProcessor] Initial processing Nov 1 00:42:22.127090 sshd_keygen[1827]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 00:42:22.157052 systemd[1]: Finished sshd-keygen.service. Nov 1 00:42:22.159768 systemd[1]: Starting issuegen.service... Nov 1 00:42:22.162369 systemd[1]: Started sshd@0-172.31.19.28:22-147.75.109.163:57946.service. Nov 1 00:42:22.173355 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 00:42:22.173668 systemd[1]: Finished issuegen.service. Nov 1 00:42:22.173969 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [OfflineService] Starting message polling Nov 1 00:42:22.179381 systemd[1]: Starting systemd-user-sessions.service... Nov 1 00:42:22.191080 systemd[1]: Finished systemd-user-sessions.service. Nov 1 00:42:22.194467 systemd[1]: Started getty@tty1.service. Nov 1 00:42:22.198452 systemd[1]: Started serial-getty@ttyS0.service. Nov 1 00:42:22.200394 systemd[1]: Reached target getty.target. Nov 1 00:42:22.203708 systemd[1]: Reached target multi-user.target. Nov 1 00:42:22.208077 systemd[1]: Starting systemd-update-utmp-runlevel.service... Nov 1 00:42:22.223759 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Nov 1 00:42:22.224109 systemd[1]: Finished systemd-update-utmp-runlevel.service. Nov 1 00:42:22.226675 systemd[1]: Startup finished in 7.056s (kernel) + 10.229s (userspace) = 17.286s. 
Nov 1 00:42:22.272122 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [OfflineService] Starting send replies to MDS Nov 1 00:42:22.370384 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [LongRunningPluginsManager] starting long running plugin manager Nov 1 00:42:22.375835 sshd[2028]: Accepted publickey for core from 147.75.109.163 port 57946 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:42:22.379938 sshd[2028]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:22.398836 systemd[1]: Created slice user-500.slice. Nov 1 00:42:22.400326 systemd[1]: Starting user-runtime-dir@500.service... Nov 1 00:42:22.405677 systemd-logind[1803]: New session 1 of user core. Nov 1 00:42:22.415472 systemd[1]: Finished user-runtime-dir@500.service. Nov 1 00:42:22.417295 systemd[1]: Starting user@500.service... Nov 1 00:42:22.422433 (systemd)[2040]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:22.469171 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Nov 1 00:42:22.539979 systemd[2040]: Queued start job for default target default.target. Nov 1 00:42:22.540408 systemd[2040]: Reached target paths.target. Nov 1 00:42:22.540438 systemd[2040]: Reached target sockets.target. Nov 1 00:42:22.540461 systemd[2040]: Reached target timers.target. Nov 1 00:42:22.540484 systemd[2040]: Reached target basic.target. Nov 1 00:42:22.540553 systemd[2040]: Reached target default.target. Nov 1 00:42:22.540594 systemd[2040]: Startup finished in 106ms. Nov 1 00:42:22.541121 systemd[1]: Started user@500.service. Nov 1 00:42:22.543082 systemd[1]: Started session-1.scope. 
Nov 1 00:42:22.567894 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Nov 1 00:42:22.666921 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [StartupProcessor] Executing startup processor tasks Nov 1 00:42:22.683428 systemd[1]: Started sshd@1-172.31.19.28:22-147.75.109.163:58068.service. Nov 1 00:42:22.765841 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Nov 1 00:42:22.851215 sshd[2051]: Accepted publickey for core from 147.75.109.163 port 58068 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:42:22.852555 sshd[2051]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:22.858709 systemd-logind[1803]: New session 2 of user core. Nov 1 00:42:22.860009 systemd[1]: Started session-2.scope. Nov 1 00:42:22.865071 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Nov 1 00:42:22.965574 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.8 Nov 1 00:42:22.993518 sshd[2051]: pam_unix(sshd:session): session closed for user core Nov 1 00:42:22.996628 systemd-logind[1803]: Session 2 logged out. Waiting for processes to exit. Nov 1 00:42:22.997799 systemd[1]: sshd@1-172.31.19.28:22-147.75.109.163:58068.service: Deactivated successfully. Nov 1 00:42:22.999293 systemd[1]: session-2.scope: Deactivated successfully. Nov 1 00:42:23.000260 systemd-logind[1803]: Removed session 2. Nov 1 00:42:23.016943 systemd[1]: Started sshd@2-172.31.19.28:22-147.75.109.163:58084.service. 
Nov 1 00:42:23.053631 kubelet[2012]: E1101 00:42:23.053568 2012 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:42:23.055949 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:42:23.056430 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:42:23.065092 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0b303a8204aa449e1?role=subscribe&stream=input Nov 1 00:42:23.164876 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0b303a8204aa449e1?role=subscribe&stream=input Nov 1 00:42:23.179884 sshd[2058]: Accepted publickey for core from 147.75.109.163 port 58084 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:42:23.181438 sshd[2058]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:23.186733 systemd-logind[1803]: New session 3 of user core. Nov 1 00:42:23.187530 systemd[1]: Started session-3.scope. Nov 1 00:42:23.265034 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [MessageGatewayService] Starting receiving message from control channel Nov 1 00:42:23.310705 sshd[2058]: pam_unix(sshd:session): session closed for user core Nov 1 00:42:23.316477 systemd[1]: sshd@2-172.31.19.28:22-147.75.109.163:58084.service: Deactivated successfully. Nov 1 00:42:23.317583 systemd[1]: session-3.scope: Deactivated successfully. Nov 1 00:42:23.319309 systemd-logind[1803]: Session 3 logged out. Waiting for processes to exit. 
Nov 1 00:42:23.324243 systemd-logind[1803]: Removed session 3. Nov 1 00:42:23.343298 systemd[1]: Started sshd@3-172.31.19.28:22-147.75.109.163:58096.service. Nov 1 00:42:23.365151 amazon-ssm-agent[1885]: 2025-11-01 00:42:20 INFO [MessageGatewayService] [EngineProcessor] Initial processing Nov 1 00:42:23.497749 sshd[2066]: Accepted publickey for core from 147.75.109.163 port 58096 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:42:23.499517 sshd[2066]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:23.505406 systemd[1]: Started session-4.scope. Nov 1 00:42:23.506413 systemd-logind[1803]: New session 4 of user core. Nov 1 00:42:23.632648 sshd[2066]: pam_unix(sshd:session): session closed for user core Nov 1 00:42:23.636209 systemd[1]: sshd@3-172.31.19.28:22-147.75.109.163:58096.service: Deactivated successfully. Nov 1 00:42:23.637323 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 00:42:23.639207 systemd-logind[1803]: Session 4 logged out. Waiting for processes to exit. Nov 1 00:42:23.640765 systemd-logind[1803]: Removed session 4. Nov 1 00:42:23.657265 systemd[1]: Started sshd@4-172.31.19.28:22-147.75.109.163:58108.service. Nov 1 00:42:23.815754 sshd[2073]: Accepted publickey for core from 147.75.109.163 port 58108 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:42:23.817198 sshd[2073]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:23.822893 systemd[1]: Started session-5.scope. Nov 1 00:42:23.823403 systemd-logind[1803]: New session 5 of user core. 
Nov 1 00:42:23.944780 sudo[2077]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 1 00:42:23.945431 sudo[2077]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 00:42:23.953573 dbus-daemon[1794]: \xd0-GJ\u0017V: received setenforce notice (enforcing=-1867564752) Nov 1 00:42:23.955621 sudo[2077]: pam_unix(sudo:session): session closed for user root Nov 1 00:42:23.980008 sshd[2073]: pam_unix(sshd:session): session closed for user core Nov 1 00:42:23.983824 systemd[1]: sshd@4-172.31.19.28:22-147.75.109.163:58108.service: Deactivated successfully. Nov 1 00:42:23.984977 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 00:42:23.986889 systemd-logind[1803]: Session 5 logged out. Waiting for processes to exit. Nov 1 00:42:23.988293 systemd-logind[1803]: Removed session 5. Nov 1 00:42:24.004968 systemd[1]: Started sshd@5-172.31.19.28:22-147.75.109.163:58122.service. Nov 1 00:42:24.162139 sshd[2081]: Accepted publickey for core from 147.75.109.163 port 58122 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:42:24.163865 sshd[2081]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:24.173033 systemd-logind[1803]: New session 6 of user core. Nov 1 00:42:24.173686 systemd[1]: Started session-6.scope. Nov 1 00:42:24.305476 sudo[2086]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 1 00:42:24.305716 sudo[2086]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 00:42:24.309993 sudo[2086]: pam_unix(sudo:session): session closed for user root Nov 1 00:42:24.315562 sudo[2085]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 1 00:42:24.315803 sudo[2085]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 00:42:24.326115 systemd[1]: Stopping audit-rules.service... 
Nov 1 00:42:24.326000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Nov 1 00:42:24.328749 kernel: kauditd_printk_skb: 174 callbacks suppressed Nov 1 00:42:24.328822 kernel: audit: type=1305 audit(1761957744.326:155): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Nov 1 00:42:24.328849 auditctl[2089]: No rules Nov 1 00:42:24.326000 audit[2089]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffeed16d110 a2=420 a3=0 items=0 ppid=1 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:24.329280 systemd[1]: audit-rules.service: Deactivated successfully. Nov 1 00:42:24.329529 systemd[1]: Stopped audit-rules.service. Nov 1 00:42:24.332152 systemd[1]: Starting audit-rules.service... Nov 1 00:42:24.335462 kernel: audit: type=1300 audit(1761957744.326:155): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffeed16d110 a2=420 a3=0 items=0 ppid=1 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:24.326000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Nov 1 00:42:24.344146 kernel: audit: type=1327 audit(1761957744.326:155): proctitle=2F7362696E2F617564697463746C002D44 Nov 1 00:42:24.344192 kernel: audit: type=1131 audit(1761957744.328:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:42:24.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:24.357648 augenrules[2107]: No rules Nov 1 00:42:24.366738 kernel: audit: type=1130 audit(1761957744.357:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:24.366836 kernel: audit: type=1106 audit(1761957744.358:158): pid=2085 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:42:24.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:24.358000 audit[2085]: USER_END pid=2085 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:42:24.360119 sudo[2085]: pam_unix(sudo:session): session closed for user root Nov 1 00:42:24.358601 systemd[1]: Finished audit-rules.service. Nov 1 00:42:24.371377 kernel: audit: type=1104 audit(1761957744.358:159): pid=2085 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Nov 1 00:42:24.358000 audit[2085]: CRED_DISP pid=2085 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:42:24.382121 sshd[2081]: pam_unix(sshd:session): session closed for user core Nov 1 00:42:24.382000 audit[2081]: USER_END pid=2081 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:42:24.384880 systemd[1]: sshd@5-172.31.19.28:22-147.75.109.163:58122.service: Deactivated successfully. Nov 1 00:42:24.385661 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 00:42:24.387209 systemd-logind[1803]: Session 6 logged out. Waiting for processes to exit. Nov 1 00:42:24.389366 kernel: audit: type=1106 audit(1761957744.382:160): pid=2081 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:42:24.388739 systemd-logind[1803]: Removed session 6. Nov 1 00:42:24.382000 audit[2081]: CRED_DISP pid=2081 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:42:24.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.19.28:22-147.75.109.163:58122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:42:24.395729 kernel: audit: type=1104 audit(1761957744.382:161): pid=2081 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:42:24.395773 kernel: audit: type=1131 audit(1761957744.382:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.19.28:22-147.75.109.163:58122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:24.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.19.28:22-147.75.109.163:58138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:24.407027 systemd[1]: Started sshd@6-172.31.19.28:22-147.75.109.163:58138.service. Nov 1 00:42:24.566000 audit[2114]: USER_ACCT pid=2114 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:42:24.569058 sshd[2114]: Accepted publickey for core from 147.75.109.163 port 58138 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:42:24.567000 audit[2114]: CRED_ACQ pid=2114 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:42:24.567000 audit[2114]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc3a0adce0 a2=3 a3=0 items=0 ppid=1 pid=2114 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:24.567000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:42:24.569530 sshd[2114]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:42:24.575416 systemd[1]: Started session-7.scope. Nov 1 00:42:24.575846 systemd-logind[1803]: New session 7 of user core. Nov 1 00:42:24.582000 audit[2114]: USER_START pid=2114 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:42:24.584000 audit[2117]: CRED_ACQ pid=2117 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:42:24.680000 audit[2118]: USER_ACCT pid=2118 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:42:24.681831 sudo[2118]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 00:42:24.682165 sudo[2118]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 00:42:24.680000 audit[2118]: CRED_REFR pid=2118 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:42:24.683000 audit[2118]: USER_START pid=2118 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Nov 1 00:42:24.711403 systemd[1]: Starting docker.service... Nov 1 00:42:24.757110 env[2128]: time="2025-11-01T00:42:24.757067699Z" level=info msg="Starting up" Nov 1 00:42:24.759085 env[2128]: time="2025-11-01T00:42:24.759058409Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 00:42:24.759210 env[2128]: time="2025-11-01T00:42:24.759198675Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 00:42:24.759269 env[2128]: time="2025-11-01T00:42:24.759256768Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 00:42:24.759324 env[2128]: time="2025-11-01T00:42:24.759315953Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 00:42:24.761912 env[2128]: time="2025-11-01T00:42:24.761830581Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 00:42:24.761912 env[2128]: time="2025-11-01T00:42:24.761857949Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 00:42:24.761912 env[2128]: time="2025-11-01T00:42:24.761881082Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 00:42:24.761912 env[2128]: time="2025-11-01T00:42:24.761894471Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 00:42:24.770233 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2922923511-merged.mount: Deactivated successfully. Nov 1 00:42:25.097811 env[2128]: time="2025-11-01T00:42:25.097763438Z" level=warning msg="Your kernel does not support cgroup blkio weight" Nov 1 00:42:25.097811 env[2128]: time="2025-11-01T00:42:25.097798025Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Nov 1 00:42:25.098103 env[2128]: time="2025-11-01T00:42:25.098053224Z" level=info msg="Loading containers: start." 
Nov 1 00:42:25.226000 audit[2158]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2158 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:25.226000 audit[2158]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe7e865800 a2=0 a3=7ffe7e8657ec items=0 ppid=2128 pid=2158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:25.226000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Nov 1 00:42:25.228000 audit[2160]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2160 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:25.228000 audit[2160]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fffdcc9a340 a2=0 a3=7fffdcc9a32c items=0 ppid=2128 pid=2160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:25.228000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Nov 1 00:42:25.230000 audit[2162]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2162 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:25.230000 audit[2162]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff8565b060 a2=0 a3=7fff8565b04c items=0 ppid=2128 pid=2162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:25.230000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 
Nov 1 00:42:25.232000 audit[2164]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2164 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:25.232000 audit[2164]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd996b8540 a2=0 a3=7ffd996b852c items=0 ppid=2128 pid=2164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:25.232000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Nov 1 00:42:25.236000 audit[2166]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2166 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:25.236000 audit[2166]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffeccfe92d0 a2=0 a3=7ffeccfe92bc items=0 ppid=2128 pid=2166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:25.236000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Nov 1 00:42:25.254000 audit[2171]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=2171 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:25.254000 audit[2171]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc2eb4dcd0 a2=0 a3=7ffc2eb4dcbc items=0 ppid=2128 pid=2171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:25.254000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Nov 1 00:42:25.263000 audit[2173]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2173 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:25.263000 audit[2173]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe681d18b0 a2=0 a3=7ffe681d189c items=0 ppid=2128 pid=2173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:25.263000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Nov 1 00:42:25.265000 audit[2175]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2175 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:25.265000 audit[2175]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fffe5285470 a2=0 a3=7fffe528545c items=0 ppid=2128 pid=2175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:25.265000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Nov 1 00:42:25.268000 audit[2177]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=2177 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:25.268000 audit[2177]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffe075ea7e0 a2=0 a3=7ffe075ea7cc items=0 ppid=2128 pid=2177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 
00:42:25.268000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Nov 1 00:42:25.278000 audit[2181]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=2181 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:25.278000 audit[2181]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe49247de0 a2=0 a3=7ffe49247dcc items=0 ppid=2128 pid=2181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:25.278000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Nov 1 00:42:25.283000 audit[2182]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2182 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:25.283000 audit[2182]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe3b389250 a2=0 a3=7ffe3b38923c items=0 ppid=2128 pid=2182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:25.283000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Nov 1 00:42:25.308368 kernel: Initializing XFRM netlink socket Nov 1 00:42:25.361442 env[2128]: time="2025-11-01T00:42:25.359926951Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Nov 1 00:42:25.362266 (udev-worker)[2138]: Network interface NamePolicy= disabled on kernel command line. 
Nov 1 00:42:25.382000 audit[2190]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2190 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:25.382000 audit[2190]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffdbe6b0e10 a2=0 a3=7ffdbe6b0dfc items=0 ppid=2128 pid=2190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:25.382000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Nov 1 00:42:25.394000 audit[2193]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2193 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:25.394000 audit[2193]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff55664590 a2=0 a3=7fff5566457c items=0 ppid=2128 pid=2193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:25.394000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Nov 1 00:42:25.399000 audit[2196]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2196 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:25.399000 audit[2196]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffefab01b00 a2=0 a3=7ffefab01aec items=0 ppid=2128 pid=2196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:25.399000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Nov 1 00:42:25.401000 audit[2198]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2198 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:25.401000 audit[2198]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff31ebd010 a2=0 a3=7fff31ebcffc items=0 ppid=2128 pid=2198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:25.401000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Nov 1 00:42:25.404000 audit[2200]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=2200 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:25.404000 audit[2200]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffec8efd0d0 a2=0 a3=7ffec8efd0bc items=0 ppid=2128 pid=2200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:25.404000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Nov 1 00:42:25.406000 audit[2202]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=2202 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:25.406000 audit[2202]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffffccef2e0 a2=0 a3=7ffffccef2cc items=0 ppid=2128 pid=2202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:25.406000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Nov 1 00:42:25.408000 audit[2204]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2204 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:25.408000 audit[2204]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffc04c4ad30 a2=0 a3=7ffc04c4ad1c items=0 ppid=2128 pid=2204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:25.408000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Nov 1 00:42:25.426000 audit[2207]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=2207 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:25.426000 audit[2207]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffd58e114e0 a2=0 a3=7ffd58e114cc items=0 ppid=2128 pid=2207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:25.426000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Nov 1 00:42:25.428000 audit[2209]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=2209 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:25.428000 
audit[2209]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffe031dae50 a2=0 a3=7ffe031dae3c items=0 ppid=2128 pid=2209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:25.428000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Nov 1 00:42:25.431000 audit[2211]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2211 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:25.431000 audit[2211]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffd2607cc30 a2=0 a3=7ffd2607cc1c items=0 ppid=2128 pid=2211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:25.431000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Nov 1 00:42:25.434000 audit[2213]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2213 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:25.434000 audit[2213]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc7cbf84c0 a2=0 a3=7ffc7cbf84ac items=0 ppid=2128 pid=2213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:25.434000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Nov 1 00:42:25.436147 systemd-networkd[1497]: docker0: Link UP Nov 1 00:42:25.445000 audit[2217]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=2217 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:25.445000 audit[2217]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffebf8dd4c0 a2=0 a3=7ffebf8dd4ac items=0 ppid=2128 pid=2217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:25.445000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Nov 1 00:42:25.450000 audit[2218]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2218 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:25.450000 audit[2218]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffdde56b0b0 a2=0 a3=7ffdde56b09c items=0 ppid=2128 pid=2218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:25.450000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Nov 1 00:42:25.452415 env[2128]: time="2025-11-01T00:42:25.452378396Z" level=info msg="Loading containers: done." 
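(Editor's note: the audit PROCTITLE records above encode each process's full command line as a hex string with NUL-separated arguments. A minimal sketch for decoding them, useful when reading these entries; the helper name is illustrative, not part of any tool in this log:)

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an auditd PROCTITLE hex payload into a readable command line.

    auditd stores argv as raw bytes, with NUL (0x00) between arguments,
    then hex-encodes the whole buffer for the PROCTITLE field.
    """
    raw = bytes.fromhex(hex_str)          # undo the hex encoding
    return raw.decode().replace("\x00", " ")  # NUL separators -> spaces

# Example: the first PROCTITLE in this section decodes to the iptables
# invocation that installed the DOCKER-USER jump rule.
print(decode_proctitle(
    "2F7573722F7362696E2F69707461626C6573002D2D77616974"
    "002D4900464F5257415244002D6A00444F434B45522D55534552"
))
# -> /usr/sbin/iptables --wait -I FORWARD -j DOCKER-USER
```

Applied to the records that follow, the same decoding shows the Docker daemon programming its NAT (MASQUERADE for 172.17.0.0/16) and filter (DOCKER-ISOLATION-STAGE-1/2) chains via `xtables-nft-multi`.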
Nov 1 00:42:25.483331 env[2128]: time="2025-11-01T00:42:25.483277043Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 00:42:25.483547 env[2128]: time="2025-11-01T00:42:25.483507309Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Nov 1 00:42:25.483641 env[2128]: time="2025-11-01T00:42:25.483619937Z" level=info msg="Daemon has completed initialization" Nov 1 00:42:25.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:25.504649 systemd[1]: Started docker.service. Nov 1 00:42:25.515927 env[2128]: time="2025-11-01T00:42:25.515870030Z" level=info msg="API listen on /run/docker.sock" Nov 1 00:42:26.729606 env[1822]: time="2025-11-01T00:42:26.729560749Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 1 00:42:27.234045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount412486237.mount: Deactivated successfully. 
Nov 1 00:42:28.793474 env[1822]: time="2025-11-01T00:42:28.793416424Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:28.795968 env[1822]: time="2025-11-01T00:42:28.795918759Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:28.798206 env[1822]: time="2025-11-01T00:42:28.798161378Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:28.800291 env[1822]: time="2025-11-01T00:42:28.800253238Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:28.801092 env[1822]: time="2025-11-01T00:42:28.801055138Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 1 00:42:28.801683 env[1822]: time="2025-11-01T00:42:28.801662653Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 1 00:42:30.473909 env[1822]: time="2025-11-01T00:42:30.473820724Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:30.479292 env[1822]: time="2025-11-01T00:42:30.479243473Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Nov 1 00:42:30.482956 env[1822]: time="2025-11-01T00:42:30.482911476Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:30.486389 env[1822]: time="2025-11-01T00:42:30.486329911Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:30.487403 env[1822]: time="2025-11-01T00:42:30.487361934Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 1 00:42:30.488054 env[1822]: time="2025-11-01T00:42:30.488024326Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 1 00:42:31.901293 env[1822]: time="2025-11-01T00:42:31.901242311Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:31.903864 env[1822]: time="2025-11-01T00:42:31.903818648Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:31.905986 env[1822]: time="2025-11-01T00:42:31.905944533Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:31.908596 env[1822]: time="2025-11-01T00:42:31.908549862Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:31.909205 env[1822]: time="2025-11-01T00:42:31.909172944Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 1 00:42:31.910762 env[1822]: time="2025-11-01T00:42:31.910413480Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 1 00:42:33.216534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1111090206.mount: Deactivated successfully. Nov 1 00:42:33.218038 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 00:42:33.228670 kernel: kauditd_printk_skb: 84 callbacks suppressed Nov 1 00:42:33.228811 kernel: audit: type=1130 audit(1761957753.216:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:33.228859 kernel: audit: type=1131 audit(1761957753.216:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:33.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:33.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:33.218260 systemd[1]: Stopped kubelet.service. Nov 1 00:42:33.220388 systemd[1]: Starting kubelet.service... 
Nov 1 00:42:34.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:34.467005 systemd[1]: Started kubelet.service. Nov 1 00:42:34.473740 kernel: audit: type=1130 audit(1761957754.466:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:34.572378 kubelet[2259]: E1101 00:42:34.572313 2259 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:42:34.576833 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:42:34.577050 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:42:34.582462 kernel: audit: type=1131 audit(1761957754.576:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Nov 1 00:42:34.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Nov 1 00:42:34.849271 env[1822]: time="2025-11-01T00:42:34.849209508Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:34.856786 env[1822]: time="2025-11-01T00:42:34.856731086Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:34.902213 env[1822]: time="2025-11-01T00:42:34.902157133Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:34.921395 env[1822]: time="2025-11-01T00:42:34.921327522Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:34.922066 env[1822]: time="2025-11-01T00:42:34.922016207Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 1 00:42:34.923042 env[1822]: time="2025-11-01T00:42:34.923001622Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 1 00:42:35.435765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount843460983.mount: Deactivated successfully. 
Nov 1 00:42:36.617234 env[1822]: time="2025-11-01T00:42:36.617168449Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:36.620469 env[1822]: time="2025-11-01T00:42:36.620419080Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:36.623710 env[1822]: time="2025-11-01T00:42:36.623652248Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:36.626210 env[1822]: time="2025-11-01T00:42:36.626154264Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:36.627092 env[1822]: time="2025-11-01T00:42:36.627057483Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 1 00:42:36.627941 env[1822]: time="2025-11-01T00:42:36.627913285Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 1 00:42:37.033073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1194907255.mount: Deactivated successfully. 
Nov 1 00:42:37.041537 env[1822]: time="2025-11-01T00:42:37.041484533Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:37.044037 env[1822]: time="2025-11-01T00:42:37.043990844Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:37.045721 env[1822]: time="2025-11-01T00:42:37.045675704Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:37.047979 env[1822]: time="2025-11-01T00:42:37.047946514Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:37.048793 env[1822]: time="2025-11-01T00:42:37.048751337Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 1 00:42:37.049456 env[1822]: time="2025-11-01T00:42:37.049414511Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 1 00:42:37.521793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3419941815.mount: Deactivated successfully. 
Nov 1 00:42:40.148311 env[1822]: time="2025-11-01T00:42:40.148249732Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:40.152235 env[1822]: time="2025-11-01T00:42:40.152185330Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:40.156269 env[1822]: time="2025-11-01T00:42:40.156224460Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:40.165906 env[1822]: time="2025-11-01T00:42:40.165846117Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:40.167220 env[1822]: time="2025-11-01T00:42:40.167175079Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 1 00:42:42.733543 systemd[1]: Stopped kubelet.service. Nov 1 00:42:42.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:42.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:42.739879 systemd[1]: Starting kubelet.service... 
Nov 1 00:42:42.742772 kernel: audit: type=1130 audit(1761957762.732:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:42.742883 kernel: audit: type=1131 audit(1761957762.735:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:42.779959 systemd[1]: Reloading. Nov 1 00:42:42.913508 /usr/lib/systemd/system-generators/torcx-generator[2313]: time="2025-11-01T00:42:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:42:42.916633 /usr/lib/systemd/system-generators/torcx-generator[2313]: time="2025-11-01T00:42:42Z" level=info msg="torcx already run" Nov 1 00:42:43.056299 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:42:43.056325 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:42:43.085045 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:42:43.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Nov 1 00:42:43.215427 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 1 00:42:43.215556 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 1 00:42:43.215950 systemd[1]: Stopped kubelet.service. Nov 1 00:42:43.218319 systemd[1]: Starting kubelet.service... Nov 1 00:42:43.221367 kernel: audit: type=1130 audit(1761957763.214:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Nov 1 00:42:43.579150 kernel: audit: type=1130 audit(1761957763.570:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:43.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:43.571526 systemd[1]: Started kubelet.service. Nov 1 00:42:43.647358 kubelet[2382]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:42:43.647737 kubelet[2382]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:42:43.647781 kubelet[2382]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 1 00:42:43.647932 kubelet[2382]: I1101 00:42:43.647905 2382 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:42:44.099050 kubelet[2382]: I1101 00:42:44.099005 2382 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:42:44.099050 kubelet[2382]: I1101 00:42:44.099042 2382 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:42:44.099448 kubelet[2382]: I1101 00:42:44.099426 2382 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:42:44.168092 kubelet[2382]: I1101 00:42:44.168066 2382 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:42:44.168270 kubelet[2382]: E1101 00:42:44.168240 2382 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.19.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.19.28:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:42:44.183366 kubelet[2382]: E1101 00:42:44.183311 2382 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:42:44.183661 kubelet[2382]: I1101 00:42:44.183624 2382 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:42:44.187054 kubelet[2382]: I1101 00:42:44.187024 2382 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 00:42:44.187588 kubelet[2382]: I1101 00:42:44.187547 2382 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:42:44.187799 kubelet[2382]: I1101 00:42:44.187584 2382 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-28","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 1 00:42:44.187964 kubelet[2382]: I1101 00:42:44.187810 2382 topology_manager.go:138] "Creating topology manager with none 
policy" Nov 1 00:42:44.187964 kubelet[2382]: I1101 00:42:44.187826 2382 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:42:44.188059 kubelet[2382]: I1101 00:42:44.187971 2382 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:42:44.193939 kubelet[2382]: I1101 00:42:44.193890 2382 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:42:44.193939 kubelet[2382]: I1101 00:42:44.193939 2382 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:42:44.194229 kubelet[2382]: I1101 00:42:44.193966 2382 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:42:44.194229 kubelet[2382]: I1101 00:42:44.193978 2382 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:42:44.209842 kubelet[2382]: I1101 00:42:44.209818 2382 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:42:44.210863 kubelet[2382]: I1101 00:42:44.210840 2382 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:42:44.211254 kubelet[2382]: W1101 00:42:44.211242 2382 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 1 00:42:44.222648 kubelet[2382]: W1101 00:42:44.222410 2382 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.19.28:6443: connect: connection refused Nov 1 00:42:44.222648 kubelet[2382]: E1101 00:42:44.222648 2382 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.19.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.28:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:42:44.222838 kubelet[2382]: W1101 00:42:44.222736 2382 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-28&limit=500&resourceVersion=0": dial tcp 172.31.19.28:6443: connect: connection refused Nov 1 00:42:44.222838 kubelet[2382]: E1101 00:42:44.222763 2382 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.19.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-28&limit=500&resourceVersion=0\": dial tcp 172.31.19.28:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:42:44.223922 kubelet[2382]: I1101 00:42:44.223867 2382 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:42:44.223922 kubelet[2382]: I1101 00:42:44.223922 2382 server.go:1287] "Started kubelet" Nov 1 00:42:44.225157 kubelet[2382]: I1101 00:42:44.225119 2382 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:42:44.226000 kubelet[2382]: I1101 00:42:44.225968 2382 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:42:44.227000 audit[2382]: AVC avc: denied { mac_admin } for 
pid=2382 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:42:44.229750 kubelet[2382]: I1101 00:42:44.229689 2382 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:42:44.230095 kubelet[2382]: I1101 00:42:44.230082 2382 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:42:44.232475 kubelet[2382]: I1101 00:42:44.232436 2382 kubelet.go:1507] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins_registry: invalid argument" Nov 1 00:42:44.227000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:42:44.233538 kubelet[2382]: I1101 00:42:44.233513 2382 kubelet.go:1511] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Nov 1 00:42:44.233735 kubelet[2382]: I1101 00:42:44.233723 2382 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:42:44.236258 kubelet[2382]: E1101 00:42:44.234283 2382 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.28:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.28:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-28.1873bb4055ce8074 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-28,UID:ip-172-31-19-28,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-28,},FirstTimestamp:2025-11-01 00:42:44.22389362 +0000 UTC m=+0.639627660,LastTimestamp:2025-11-01 00:42:44.22389362 +0000 UTC m=+0.639627660,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-28,}" Nov 1 00:42:44.237700 kernel: audit: type=1400 audit(1761957764.227:205): avc: denied { mac_admin } for pid=2382 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:42:44.237817 kernel: audit: type=1401 audit(1761957764.227:205): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:42:44.237852 kernel: audit: type=1300 audit(1761957764.227:205): arch=c000003e syscall=188 success=no exit=-22 a0=c0006fd950 a1=c000aa9e78 a2=c0006fd920 a3=25 items=0 ppid=1 pid=2382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:44.227000 audit[2382]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0006fd950 a1=c000aa9e78 a2=c0006fd920 a3=25 items=0 ppid=1 pid=2382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:44.239445 kubelet[2382]: E1101 00:42:44.239422 2382 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:42:44.239791 kubelet[2382]: I1101 00:42:44.239775 2382 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:42:44.244354 kubelet[2382]: I1101 00:42:44.244313 2382 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:42:44.244634 kubelet[2382]: I1101 00:42:44.244623 2382 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:42:44.244747 kubelet[2382]: I1101 00:42:44.244740 2382 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:42:44.245682 kubelet[2382]: I1101 00:42:44.245659 2382 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:42:44.246309 kubelet[2382]: W1101 00:42:44.246267 2382 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.28:6443: connect: connection refused Nov 1 00:42:44.246471 kubelet[2382]: E1101 00:42:44.246451 2382 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.19.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.28:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:42:44.247724 kubelet[2382]: I1101 00:42:44.247709 2382 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:42:44.247827 kubelet[2382]: I1101 00:42:44.247819 2382 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:42:44.249046 kubelet[2382]: E1101 00:42:44.249027 2382 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-19-28\" not found" Nov 1 00:42:44.227000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:42:44.232000 audit[2382]: AVC avc: denied { mac_admin } for pid=2382 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:42:44.267567 kubelet[2382]: E1101 00:42:44.267490 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-28?timeout=10s\": dial tcp 172.31.19.28:6443: connect: connection refused" interval="200ms" Nov 1 00:42:44.268804 kernel: audit: type=1327 audit(1761957764.227:205): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:42:44.268914 kernel: audit: type=1400 audit(1761957764.232:206): avc: denied { mac_admin } for pid=2382 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:42:44.269742 kernel: audit: type=1401 audit(1761957764.232:206): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:42:44.232000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:42:44.232000 audit[2382]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0006ff3a0 a1=c000aa9e90 a2=c0006fd9e0 a3=25 items=0 ppid=1 pid=2382 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:44.232000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:42:44.273000 audit[2394]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2394 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:44.273000 audit[2394]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd80002220 a2=0 a3=7ffd8000220c items=0 ppid=2382 pid=2394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:44.273000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Nov 1 00:42:44.274000 audit[2395]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=2395 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:44.274000 audit[2395]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd1a860570 a2=0 a3=7ffd1a86055c items=0 ppid=2382 pid=2395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:44.274000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Nov 1 00:42:44.277000 audit[2397]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=2397 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:44.277000 
audit[2397]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffef68ff8f0 a2=0 a3=7ffef68ff8dc items=0 ppid=2382 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:44.277000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 00:42:44.280000 audit[2399]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=2399 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:44.280000 audit[2399]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffc5aab020 a2=0 a3=7fffc5aab00c items=0 ppid=2382 pid=2399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:44.280000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 00:42:44.304000 audit[2403]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2403 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:44.304000 audit[2403]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffe6c877620 a2=0 a3=7ffe6c87760c items=0 ppid=2382 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:44.304000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Nov 1 00:42:44.310369 kubelet[2382]: I1101 00:42:44.310295 2382 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 00:42:44.312000 audit[2407]: NETFILTER_CFG table=mangle:31 family=2 entries=1 op=nft_register_chain pid=2407 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:44.312000 audit[2407]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffed5ed9c50 a2=0 a3=7ffed5ed9c3c items=0 ppid=2382 pid=2407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:44.312000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Nov 1 00:42:44.316609 kubelet[2382]: I1101 00:42:44.316581 2382 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:42:44.316609 kubelet[2382]: I1101 00:42:44.316606 2382 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:42:44.316609 kubelet[2382]: I1101 00:42:44.316626 2382 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:42:44.315000 audit[2406]: NETFILTER_CFG table=mangle:32 family=10 entries=2 op=nft_register_chain pid=2406 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:42:44.315000 audit[2406]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffedeb00e30 a2=0 a3=7ffedeb00e1c items=0 ppid=2382 pid=2406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:44.315000 audit: 
PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Nov 1 00:42:44.317501 kubelet[2382]: I1101 00:42:44.317442 2382 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 00:42:44.317501 kubelet[2382]: I1101 00:42:44.317493 2382 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:42:44.317648 kubelet[2382]: I1101 00:42:44.317517 2382 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 00:42:44.317648 kubelet[2382]: I1101 00:42:44.317542 2382 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:42:44.317648 kubelet[2382]: E1101 00:42:44.317600 2382 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:42:44.319199 kubelet[2382]: W1101 00:42:44.319144 2382 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.19.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.28:6443: connect: connection refused Nov 1 00:42:44.319297 kubelet[2382]: E1101 00:42:44.319219 2382 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.19.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.28:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:42:44.318000 audit[2409]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=2409 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:44.318000 audit[2409]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffefe690090 a2=0 a3=7ffefe69007c items=0 ppid=2382 pid=2409 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:44.318000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Nov 1 00:42:44.318000 audit[2411]: NETFILTER_CFG table=mangle:34 family=10 entries=1 op=nft_register_chain pid=2411 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:42:44.318000 audit[2411]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffedd084a00 a2=0 a3=10e3 items=0 ppid=2382 pid=2411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:44.318000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Nov 1 00:42:44.320000 audit[2412]: NETFILTER_CFG table=filter:35 family=2 entries=1 op=nft_register_chain pid=2412 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:44.320000 audit[2412]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe8b9f9190 a2=0 a3=7ffe8b9f917c items=0 ppid=2382 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:44.320000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Nov 1 00:42:44.320000 audit[2413]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=2413 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:42:44.320000 audit[2413]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffc491a3d70 a2=0 a3=7ffc491a3d5c items=0 ppid=2382 pid=2413 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:44.320000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Nov 1 00:42:44.322000 audit[2414]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=2414 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:42:44.322000 audit[2414]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffef24b7180 a2=0 a3=7ffef24b716c items=0 ppid=2382 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:44.322000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Nov 1 00:42:44.324517 kubelet[2382]: I1101 00:42:44.324478 2382 policy_none.go:49] "None policy: Start" Nov 1 00:42:44.324517 kubelet[2382]: I1101 00:42:44.324515 2382 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:42:44.324657 kubelet[2382]: I1101 00:42:44.324528 2382 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:42:44.333433 kubelet[2382]: I1101 00:42:44.333398 2382 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:42:44.332000 audit[2382]: AVC avc: denied { mac_admin } for pid=2382 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:42:44.332000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:42:44.332000 audit[2382]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 
a0=c0009b99b0 a1=c000cff860 a2=c0009b9980 a3=25 items=0 ppid=1 pid=2382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:44.332000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:42:44.334399 kubelet[2382]: I1101 00:42:44.334379 2382 server.go:94] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" Nov 1 00:42:44.335094 kubelet[2382]: I1101 00:42:44.335043 2382 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:42:44.335094 kubelet[2382]: I1101 00:42:44.335070 2382 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:42:44.335418 kubelet[2382]: I1101 00:42:44.335399 2382 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:42:44.338901 kubelet[2382]: E1101 00:42:44.338880 2382 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 00:42:44.339106 kubelet[2382]: E1101 00:42:44.339095 2382 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-19-28\" not found" Nov 1 00:42:44.429572 kubelet[2382]: E1101 00:42:44.424603 2382 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-28\" not found" node="ip-172-31-19-28" Nov 1 00:42:44.431924 kubelet[2382]: E1101 00:42:44.431897 2382 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-28\" not found" node="ip-172-31-19-28" Nov 1 00:42:44.435242 kubelet[2382]: E1101 00:42:44.435215 2382 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-28\" not found" node="ip-172-31-19-28" Nov 1 00:42:44.437555 kubelet[2382]: I1101 00:42:44.437529 2382 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-28" Nov 1 00:42:44.437930 kubelet[2382]: E1101 00:42:44.437901 2382 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.28:6443/api/v1/nodes\": dial tcp 172.31.19.28:6443: connect: connection refused" node="ip-172-31-19-28" Nov 1 00:42:44.445348 kubelet[2382]: I1101 00:42:44.445294 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5343bca0efed0d80ffb692febdf8386d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-28\" (UID: \"5343bca0efed0d80ffb692febdf8386d\") " pod="kube-system/kube-apiserver-ip-172-31-19-28" Nov 1 00:42:44.445587 kubelet[2382]: I1101 00:42:44.445559 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/415079f7411546a4eb301225690dedad-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-28\" (UID: \"415079f7411546a4eb301225690dedad\") " pod="kube-system/kube-controller-manager-ip-172-31-19-28" Nov 1 00:42:44.445690 kubelet[2382]: I1101 00:42:44.445604 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/415079f7411546a4eb301225690dedad-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-28\" (UID: \"415079f7411546a4eb301225690dedad\") " pod="kube-system/kube-controller-manager-ip-172-31-19-28" Nov 1 00:42:44.445690 kubelet[2382]: I1101 00:42:44.445632 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5e383cb97e6089233c942856baa17b9c-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-28\" (UID: \"5e383cb97e6089233c942856baa17b9c\") " pod="kube-system/kube-scheduler-ip-172-31-19-28" Nov 1 00:42:44.445690 kubelet[2382]: I1101 00:42:44.445655 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5343bca0efed0d80ffb692febdf8386d-ca-certs\") pod \"kube-apiserver-ip-172-31-19-28\" (UID: \"5343bca0efed0d80ffb692febdf8386d\") " pod="kube-system/kube-apiserver-ip-172-31-19-28" Nov 1 00:42:44.445690 kubelet[2382]: I1101 00:42:44.445679 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5343bca0efed0d80ffb692febdf8386d-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-28\" (UID: \"5343bca0efed0d80ffb692febdf8386d\") " pod="kube-system/kube-apiserver-ip-172-31-19-28" Nov 1 00:42:44.445867 kubelet[2382]: I1101 00:42:44.445702 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/415079f7411546a4eb301225690dedad-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-28\" (UID: \"415079f7411546a4eb301225690dedad\") " pod="kube-system/kube-controller-manager-ip-172-31-19-28" Nov 1 00:42:44.445867 kubelet[2382]: I1101 00:42:44.445725 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/415079f7411546a4eb301225690dedad-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-28\" (UID: \"415079f7411546a4eb301225690dedad\") " pod="kube-system/kube-controller-manager-ip-172-31-19-28" Nov 1 00:42:44.445867 kubelet[2382]: I1101 00:42:44.445752 2382 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/415079f7411546a4eb301225690dedad-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-28\" (UID: \"415079f7411546a4eb301225690dedad\") " pod="kube-system/kube-controller-manager-ip-172-31-19-28" Nov 1 00:42:44.469105 kubelet[2382]: E1101 00:42:44.469042 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-28?timeout=10s\": dial tcp 172.31.19.28:6443: connect: connection refused" interval="400ms" Nov 1 00:42:44.640836 kubelet[2382]: I1101 00:42:44.640808 2382 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-28" Nov 1 00:42:44.641445 kubelet[2382]: E1101 00:42:44.641399 2382 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.28:6443/api/v1/nodes\": dial tcp 172.31.19.28:6443: connect: connection refused" node="ip-172-31-19-28" Nov 1 00:42:44.734708 env[1822]: time="2025-11-01T00:42:44.734205682Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-28,Uid:415079f7411546a4eb301225690dedad,Namespace:kube-system,Attempt:0,}" Nov 1 00:42:44.734708 env[1822]: time="2025-11-01T00:42:44.734268502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-28,Uid:5343bca0efed0d80ffb692febdf8386d,Namespace:kube-system,Attempt:0,}" Nov 1 00:42:44.737962 env[1822]: time="2025-11-01T00:42:44.737910416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-28,Uid:5e383cb97e6089233c942856baa17b9c,Namespace:kube-system,Attempt:0,}" Nov 1 00:42:44.870125 kubelet[2382]: E1101 00:42:44.870080 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-28?timeout=10s\": dial tcp 172.31.19.28:6443: connect: connection refused" interval="800ms" Nov 1 00:42:45.044659 kubelet[2382]: I1101 00:42:45.044205 2382 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-28" Nov 1 00:42:45.044801 kubelet[2382]: E1101 00:42:45.044647 2382 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.28:6443/api/v1/nodes\": dial tcp 172.31.19.28:6443: connect: connection refused" node="ip-172-31-19-28" Nov 1 00:42:45.185552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1775690194.mount: Deactivated successfully. 
Nov 1 00:42:45.206589 env[1822]: time="2025-11-01T00:42:45.206352140Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:45.214100 env[1822]: time="2025-11-01T00:42:45.213740537Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:45.218977 env[1822]: time="2025-11-01T00:42:45.218927903Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:45.221441 env[1822]: time="2025-11-01T00:42:45.221384104Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:45.223695 env[1822]: time="2025-11-01T00:42:45.223647714Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:45.228295 env[1822]: time="2025-11-01T00:42:45.228248278Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:45.231310 env[1822]: time="2025-11-01T00:42:45.231261500Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:45.233251 env[1822]: time="2025-11-01T00:42:45.233200151Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 
00:42:45.235633 env[1822]: time="2025-11-01T00:42:45.235588827Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:45.238616 env[1822]: time="2025-11-01T00:42:45.238382891Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:45.243150 env[1822]: time="2025-11-01T00:42:45.243104357Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:45.249776 env[1822]: time="2025-11-01T00:42:45.249727855Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:42:45.296597 env[1822]: time="2025-11-01T00:42:45.295624264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:42:45.296909 env[1822]: time="2025-11-01T00:42:45.296871366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:42:45.297043 env[1822]: time="2025-11-01T00:42:45.297018251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:42:45.299748 env[1822]: time="2025-11-01T00:42:45.299675276Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2206219efcb422674ceb454b278a7b90d50add3c05ede8f7b383da6a1bc9da12 pid=2428 runtime=io.containerd.runc.v2 Nov 1 00:42:45.303257 env[1822]: time="2025-11-01T00:42:45.302507214Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:42:45.303257 env[1822]: time="2025-11-01T00:42:45.302594566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:42:45.303257 env[1822]: time="2025-11-01T00:42:45.302625062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:42:45.303257 env[1822]: time="2025-11-01T00:42:45.302843119Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/248eab3681d9792b93a9a1ba24eb41b1e1259ccb09eb2e63fcf5cf81515bae09 pid=2439 runtime=io.containerd.runc.v2 Nov 1 00:42:45.303556 kubelet[2382]: W1101 00:42:45.303110 2382 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-28&limit=500&resourceVersion=0": dial tcp 172.31.19.28:6443: connect: connection refused Nov 1 00:42:45.303556 kubelet[2382]: E1101 00:42:45.303204 2382 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.19.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-28&limit=500&resourceVersion=0\": dial tcp 172.31.19.28:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:42:45.343954 env[1822]: 
time="2025-11-01T00:42:45.343870390Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:42:45.344237 env[1822]: time="2025-11-01T00:42:45.344174792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:42:45.344429 env[1822]: time="2025-11-01T00:42:45.344401417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:42:45.344754 env[1822]: time="2025-11-01T00:42:45.344720148Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/21f6501d782afded9ec5b95925aac37fd2c21af3296e97891250b9ad6fd5e25a pid=2476 runtime=io.containerd.runc.v2 Nov 1 00:42:45.374801 kubelet[2382]: W1101 00:42:45.374671 2382 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.19.28:6443: connect: connection refused Nov 1 00:42:45.374801 kubelet[2382]: E1101 00:42:45.374750 2382 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.19.28:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.28:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:42:45.457414 env[1822]: time="2025-11-01T00:42:45.457368042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-28,Uid:5e383cb97e6089233c942856baa17b9c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2206219efcb422674ceb454b278a7b90d50add3c05ede8f7b383da6a1bc9da12\"" Nov 1 00:42:45.461749 env[1822]: time="2025-11-01T00:42:45.461706598Z" level=info 
msg="CreateContainer within sandbox \"2206219efcb422674ceb454b278a7b90d50add3c05ede8f7b383da6a1bc9da12\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:42:45.476882 env[1822]: time="2025-11-01T00:42:45.476836311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-28,Uid:415079f7411546a4eb301225690dedad,Namespace:kube-system,Attempt:0,} returns sandbox id \"21f6501d782afded9ec5b95925aac37fd2c21af3296e97891250b9ad6fd5e25a\"" Nov 1 00:42:45.478759 env[1822]: time="2025-11-01T00:42:45.478700382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-28,Uid:5343bca0efed0d80ffb692febdf8386d,Namespace:kube-system,Attempt:0,} returns sandbox id \"248eab3681d9792b93a9a1ba24eb41b1e1259ccb09eb2e63fcf5cf81515bae09\"" Nov 1 00:42:45.484600 env[1822]: time="2025-11-01T00:42:45.484549964Z" level=info msg="CreateContainer within sandbox \"21f6501d782afded9ec5b95925aac37fd2c21af3296e97891250b9ad6fd5e25a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:42:45.486183 env[1822]: time="2025-11-01T00:42:45.486140776Z" level=info msg="CreateContainer within sandbox \"248eab3681d9792b93a9a1ba24eb41b1e1259ccb09eb2e63fcf5cf81515bae09\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:42:45.515741 env[1822]: time="2025-11-01T00:42:45.515564146Z" level=info msg="CreateContainer within sandbox \"2206219efcb422674ceb454b278a7b90d50add3c05ede8f7b383da6a1bc9da12\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"52d27442d56927ece45b19987119e768a5187f631d664222f598e1ce9864d503\"" Nov 1 00:42:45.518026 env[1822]: time="2025-11-01T00:42:45.517638729Z" level=info msg="StartContainer for \"52d27442d56927ece45b19987119e768a5187f631d664222f598e1ce9864d503\"" Nov 1 00:42:45.524563 env[1822]: time="2025-11-01T00:42:45.524504294Z" level=info msg="CreateContainer within sandbox 
\"21f6501d782afded9ec5b95925aac37fd2c21af3296e97891250b9ad6fd5e25a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"10556918de567080a9089ee0c26ef85a30dedd7578caea1c48f2585b1b05fcde\"" Nov 1 00:42:45.525231 env[1822]: time="2025-11-01T00:42:45.525197412Z" level=info msg="StartContainer for \"10556918de567080a9089ee0c26ef85a30dedd7578caea1c48f2585b1b05fcde\"" Nov 1 00:42:45.548379 env[1822]: time="2025-11-01T00:42:45.548315379Z" level=info msg="CreateContainer within sandbox \"248eab3681d9792b93a9a1ba24eb41b1e1259ccb09eb2e63fcf5cf81515bae09\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"634b9cd19222ddf08c9fcafcb5d0f6c0b710a68f84782ab3c46095d4b26bd672\"" Nov 1 00:42:45.549890 env[1822]: time="2025-11-01T00:42:45.549851843Z" level=info msg="StartContainer for \"634b9cd19222ddf08c9fcafcb5d0f6c0b710a68f84782ab3c46095d4b26bd672\"" Nov 1 00:42:45.566966 kubelet[2382]: W1101 00:42:45.566884 2382 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.28:6443: connect: connection refused Nov 1 00:42:45.567129 kubelet[2382]: E1101 00:42:45.566996 2382 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.19.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.28:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:42:45.646292 env[1822]: time="2025-11-01T00:42:45.646245218Z" level=info msg="StartContainer for \"52d27442d56927ece45b19987119e768a5187f631d664222f598e1ce9864d503\" returns successfully" Nov 1 00:42:45.674783 kubelet[2382]: E1101 00:42:45.674731 2382 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.19.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-28?timeout=10s\": dial tcp 172.31.19.28:6443: connect: connection refused" interval="1.6s" Nov 1 00:42:45.717509 env[1822]: time="2025-11-01T00:42:45.717453695Z" level=info msg="StartContainer for \"10556918de567080a9089ee0c26ef85a30dedd7578caea1c48f2585b1b05fcde\" returns successfully" Nov 1 00:42:45.734986 kubelet[2382]: W1101 00:42:45.734925 2382 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.19.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.28:6443: connect: connection refused Nov 1 00:42:45.735148 kubelet[2382]: E1101 00:42:45.735002 2382 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.19.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.28:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:42:45.735206 env[1822]: time="2025-11-01T00:42:45.735159941Z" level=info msg="StartContainer for \"634b9cd19222ddf08c9fcafcb5d0f6c0b710a68f84782ab3c46095d4b26bd672\" returns successfully" Nov 1 00:42:45.847054 kubelet[2382]: I1101 00:42:45.846764 2382 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-28" Nov 1 00:42:45.847197 kubelet[2382]: E1101 00:42:45.847119 2382 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.19.28:6443/api/v1/nodes\": dial tcp 172.31.19.28:6443: connect: connection refused" node="ip-172-31-19-28" Nov 1 00:42:46.328824 kubelet[2382]: E1101 00:42:46.328570 2382 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-28\" not found" node="ip-172-31-19-28" Nov 1 00:42:46.334935 kubelet[2382]: E1101 00:42:46.334721 2382 
kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-28\" not found" node="ip-172-31-19-28" Nov 1 00:42:46.338740 kubelet[2382]: E1101 00:42:46.338713 2382 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-28\" not found" node="ip-172-31-19-28" Nov 1 00:42:46.363382 kubelet[2382]: E1101 00:42:46.363323 2382 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.19.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.19.28:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:42:47.339851 kubelet[2382]: E1101 00:42:47.339825 2382 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-28\" not found" node="ip-172-31-19-28" Nov 1 00:42:47.340917 kubelet[2382]: E1101 00:42:47.340896 2382 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-19-28\" not found" node="ip-172-31-19-28" Nov 1 00:42:47.449078 kubelet[2382]: I1101 00:42:47.449051 2382 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-28" Nov 1 00:42:47.589573 amazon-ssm-agent[1885]: 2025-11-01 00:42:47 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. 
Nov 1 00:42:48.680195 kubelet[2382]: E1101 00:42:48.680135 2382 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-19-28\" not found" node="ip-172-31-19-28" Nov 1 00:42:48.801925 kubelet[2382]: I1101 00:42:48.801884 2382 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-19-28" Nov 1 00:42:48.850046 kubelet[2382]: I1101 00:42:48.849987 2382 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-19-28" Nov 1 00:42:48.857908 kubelet[2382]: E1101 00:42:48.857867 2382 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-19-28\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-19-28" Nov 1 00:42:48.857908 kubelet[2382]: I1101 00:42:48.857904 2382 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-19-28" Nov 1 00:42:48.860226 kubelet[2382]: E1101 00:42:48.860192 2382 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-19-28\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-19-28" Nov 1 00:42:48.860468 kubelet[2382]: I1101 00:42:48.860451 2382 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-28" Nov 1 00:42:48.863246 kubelet[2382]: E1101 00:42:48.863211 2382 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-19-28\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-19-28" Nov 1 00:42:49.206516 kubelet[2382]: I1101 00:42:49.206136 2382 apiserver.go:52] "Watching apiserver" Nov 1 00:42:49.245519 kubelet[2382]: I1101 00:42:49.245435 2382 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 
00:42:49.644971 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 1 00:42:49.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:49.648573 kernel: kauditd_printk_skb: 42 callbacks suppressed Nov 1 00:42:49.648692 kernel: audit: type=1131 audit(1761957769.644:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:51.060886 systemd[1]: Reloading. Nov 1 00:42:51.144004 /usr/lib/systemd/system-generators/torcx-generator[2677]: time="2025-11-01T00:42:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:42:51.144045 /usr/lib/systemd/system-generators/torcx-generator[2677]: time="2025-11-01T00:42:51Z" level=info msg="torcx already run" Nov 1 00:42:51.275492 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:42:51.275517 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:42:51.296683 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:42:51.412431 systemd[1]: Stopping kubelet.service... Nov 1 00:42:51.435323 systemd[1]: kubelet.service: Deactivated successfully. 
Nov 1 00:42:51.435731 systemd[1]: Stopped kubelet.service. Nov 1 00:42:51.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:51.444427 kernel: audit: type=1131 audit(1761957771.434:221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:51.445254 systemd[1]: Starting kubelet.service... Nov 1 00:42:52.929835 systemd[1]: Started kubelet.service. Nov 1 00:42:52.940883 kernel: audit: type=1130 audit(1761957772.928:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:52.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:42:53.050324 kubelet[2748]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:42:53.050324 kubelet[2748]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:42:53.050324 kubelet[2748]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 1 00:42:53.051163 kubelet[2748]: I1101 00:42:53.050597 2748 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:42:53.095841 kubelet[2748]: I1101 00:42:53.095788 2748 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:42:53.095841 kubelet[2748]: I1101 00:42:53.095828 2748 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:42:53.096266 kubelet[2748]: I1101 00:42:53.096242 2748 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:42:53.106734 kubelet[2748]: I1101 00:42:53.106282 2748 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 1 00:42:53.115765 kubelet[2748]: I1101 00:42:53.115724 2748 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:42:53.174872 kubelet[2748]: E1101 00:42:53.174833 2748 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:42:53.175073 kubelet[2748]: I1101 00:42:53.175060 2748 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:42:53.178481 kubelet[2748]: I1101 00:42:53.178452 2748 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 00:42:53.179263 kubelet[2748]: I1101 00:42:53.179217 2748 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:42:53.179506 kubelet[2748]: I1101 00:42:53.179255 2748 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-28","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 1 00:42:53.179651 kubelet[2748]: I1101 00:42:53.179519 2748 topology_manager.go:138] "Creating topology manager with none 
policy" Nov 1 00:42:53.179651 kubelet[2748]: I1101 00:42:53.179534 2748 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:42:53.179651 kubelet[2748]: I1101 00:42:53.179596 2748 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:42:53.179788 kubelet[2748]: I1101 00:42:53.179757 2748 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:42:53.179840 kubelet[2748]: I1101 00:42:53.179792 2748 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:42:53.179840 kubelet[2748]: I1101 00:42:53.179817 2748 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:42:53.179840 kubelet[2748]: I1101 00:42:53.179831 2748 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:42:53.188427 kubelet[2748]: I1101 00:42:53.181976 2748 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:42:53.188427 kubelet[2748]: I1101 00:42:53.182580 2748 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:42:53.188427 kubelet[2748]: I1101 00:42:53.183114 2748 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:42:53.188427 kubelet[2748]: I1101 00:42:53.183148 2748 server.go:1287] "Started kubelet" Nov 1 00:42:53.243658 kubelet[2748]: I1101 00:42:53.243623 2748 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:42:53.244674 kubelet[2748]: I1101 00:42:53.244656 2748 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:42:53.246000 audit[2748]: AVC avc: denied { mac_admin } for pid=2748 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:42:53.247608 kubelet[2748]: I1101 00:42:53.247578 2748 kubelet.go:1507] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" 
path="/var/lib/kubelet/plugins_registry" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins_registry: invalid argument" Nov 1 00:42:53.247707 kubelet[2748]: I1101 00:42:53.247696 2748 kubelet.go:1511] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Nov 1 00:42:53.247790 kubelet[2748]: I1101 00:42:53.247780 2748 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:42:53.253409 kernel: audit: type=1400 audit(1761957773.246:223): avc: denied { mac_admin } for pid=2748 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:42:53.246000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:42:53.258402 kernel: audit: type=1401 audit(1761957773.246:223): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:42:53.246000 audit[2748]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000be11d0 a1=c000b03d40 a2=c000be11a0 a3=25 items=0 ppid=1 pid=2748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:53.268548 kernel: audit: type=1300 audit(1761957773.246:223): arch=c000003e syscall=188 success=no exit=-22 a0=c000be11d0 a1=c000b03d40 a2=c000be11a0 a3=25 items=0 ppid=1 pid=2748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:53.270681 kubelet[2748]: I1101 00:42:53.270646 2748 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:42:53.246000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:42:53.282664 kubelet[2748]: I1101 00:42:53.281797 2748 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:42:53.284179 kernel: audit: type=1327 audit(1761957773.246:223): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:42:53.284298 kubelet[2748]: I1101 00:42:53.270911 2748 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:42:53.246000 audit[2748]: AVC avc: denied { mac_admin } for pid=2748 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:42:53.293372 kernel: audit: type=1400 audit(1761957773.246:224): avc: denied { mac_admin } for pid=2748 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:42:53.246000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:42:53.246000 audit[2748]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b63820 a1=c000b03d58 a2=c000be1260 a3=25 items=0 ppid=1 pid=2748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:53.306690 kernel: audit: 
type=1401 audit(1761957773.246:224): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:42:53.306823 kernel: audit: type=1300 audit(1761957773.246:224): arch=c000003e syscall=188 success=no exit=-22 a0=c000b63820 a1=c000b03d58 a2=c000be1260 a3=25 items=0 ppid=1 pid=2748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:53.246000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:42:53.316117 kubelet[2748]: I1101 00:42:53.316077 2748 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:42:53.316288 kubelet[2748]: I1101 00:42:53.316200 2748 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:42:53.321612 kubelet[2748]: I1101 00:42:53.321577 2748 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:42:53.328936 kubelet[2748]: I1101 00:42:53.328904 2748 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:42:53.328936 kubelet[2748]: I1101 00:42:53.328931 2748 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:42:53.329646 kubelet[2748]: I1101 00:42:53.329576 2748 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:42:53.345113 kubelet[2748]: I1101 00:42:53.345072 2748 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Nov 1 00:42:53.347474 kubelet[2748]: I1101 00:42:53.347441 2748 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 00:42:53.347665 kubelet[2748]: I1101 00:42:53.347654 2748 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:42:53.347753 kubelet[2748]: I1101 00:42:53.347743 2748 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 00:42:53.347802 kubelet[2748]: I1101 00:42:53.347797 2748 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:42:53.347903 kubelet[2748]: E1101 00:42:53.347888 2748 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:42:53.364536 kubelet[2748]: E1101 00:42:53.364159 2748 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:42:53.431930 kubelet[2748]: I1101 00:42:53.431908 2748 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:42:53.432102 kubelet[2748]: I1101 00:42:53.432091 2748 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:42:53.432184 kubelet[2748]: I1101 00:42:53.432177 2748 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:42:53.432447 kubelet[2748]: I1101 00:42:53.432435 2748 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:42:53.432668 kubelet[2748]: I1101 00:42:53.432643 2748 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:42:53.432741 kubelet[2748]: I1101 00:42:53.432734 2748 policy_none.go:49] "None policy: Start" Nov 1 00:42:53.432799 kubelet[2748]: I1101 00:42:53.432787 2748 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:42:53.432853 kubelet[2748]: I1101 00:42:53.432847 2748 state_mem.go:35] "Initializing new in-memory state store" Nov 1 
00:42:53.433033 kubelet[2748]: I1101 00:42:53.433026 2748 state_mem.go:75] "Updated machine memory state" Nov 1 00:42:53.434574 kubelet[2748]: I1101 00:42:53.434553 2748 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:42:53.433000 audit[2748]: AVC avc: denied { mac_admin } for pid=2748 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:42:53.433000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 00:42:53.433000 audit[2748]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0011deff0 a1=c0011e2558 a2=c0011defc0 a3=25 items=0 ppid=1 pid=2748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:53.433000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 00:42:53.435033 kubelet[2748]: I1101 00:42:53.434740 2748 server.go:94] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" Nov 1 00:42:53.435033 kubelet[2748]: I1101 00:42:53.434983 2748 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:42:53.435132 kubelet[2748]: I1101 00:42:53.435019 2748 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:42:53.435528 kubelet[2748]: I1101 00:42:53.435511 2748 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:42:53.439399 kubelet[2748]: E1101 00:42:53.439185 2748 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:42:53.464820 kubelet[2748]: I1101 00:42:53.464773 2748 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-19-28" Nov 1 00:42:53.469765 kubelet[2748]: I1101 00:42:53.469479 2748 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-28" Nov 1 00:42:53.472620 kubelet[2748]: I1101 00:42:53.472572 2748 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-19-28" Nov 1 00:42:53.521568 kubelet[2748]: I1101 00:42:53.521527 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5343bca0efed0d80ffb692febdf8386d-ca-certs\") pod \"kube-apiserver-ip-172-31-19-28\" (UID: \"5343bca0efed0d80ffb692febdf8386d\") " pod="kube-system/kube-apiserver-ip-172-31-19-28" Nov 1 00:42:53.521872 kubelet[2748]: I1101 00:42:53.521847 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/5343bca0efed0d80ffb692febdf8386d-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-28\" (UID: \"5343bca0efed0d80ffb692febdf8386d\") " pod="kube-system/kube-apiserver-ip-172-31-19-28" Nov 1 00:42:53.521960 kubelet[2748]: I1101 00:42:53.521908 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5343bca0efed0d80ffb692febdf8386d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-28\" (UID: \"5343bca0efed0d80ffb692febdf8386d\") " pod="kube-system/kube-apiserver-ip-172-31-19-28" Nov 1 00:42:53.521960 kubelet[2748]: I1101 00:42:53.521944 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/415079f7411546a4eb301225690dedad-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-28\" (UID: \"415079f7411546a4eb301225690dedad\") " pod="kube-system/kube-controller-manager-ip-172-31-19-28" Nov 1 00:42:53.522084 kubelet[2748]: I1101 00:42:53.521977 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/415079f7411546a4eb301225690dedad-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-28\" (UID: \"415079f7411546a4eb301225690dedad\") " pod="kube-system/kube-controller-manager-ip-172-31-19-28" Nov 1 00:42:53.522084 kubelet[2748]: I1101 00:42:53.522004 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/415079f7411546a4eb301225690dedad-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-28\" (UID: \"415079f7411546a4eb301225690dedad\") " pod="kube-system/kube-controller-manager-ip-172-31-19-28" Nov 1 00:42:53.522084 kubelet[2748]: I1101 00:42:53.522037 2748 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/415079f7411546a4eb301225690dedad-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-28\" (UID: \"415079f7411546a4eb301225690dedad\") " pod="kube-system/kube-controller-manager-ip-172-31-19-28" Nov 1 00:42:53.522084 kubelet[2748]: I1101 00:42:53.522076 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/415079f7411546a4eb301225690dedad-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-28\" (UID: \"415079f7411546a4eb301225690dedad\") " pod="kube-system/kube-controller-manager-ip-172-31-19-28" Nov 1 00:42:53.548947 kubelet[2748]: I1101 00:42:53.548915 2748 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-19-28" Nov 1 00:42:53.559449 kubelet[2748]: I1101 00:42:53.559416 2748 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-19-28" Nov 1 00:42:53.560915 kubelet[2748]: I1101 00:42:53.560898 2748 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-19-28" Nov 1 00:42:53.623741 kubelet[2748]: I1101 00:42:53.623692 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5e383cb97e6089233c942856baa17b9c-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-28\" (UID: \"5e383cb97e6089233c942856baa17b9c\") " pod="kube-system/kube-scheduler-ip-172-31-19-28" Nov 1 00:42:54.190948 kubelet[2748]: I1101 00:42:54.190899 2748 apiserver.go:52] "Watching apiserver" Nov 1 00:42:54.218355 kubelet[2748]: I1101 00:42:54.218312 2748 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:42:54.327931 kubelet[2748]: I1101 00:42:54.327762 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ip-172-31-19-28" podStartSLOduration=1.327739568 podStartE2EDuration="1.327739568s" podCreationTimestamp="2025-11-01 00:42:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:42:54.315965484 +0000 UTC m=+1.361738167" watchObservedRunningTime="2025-11-01 00:42:54.327739568 +0000 UTC m=+1.373512252" Nov 1 00:42:54.344206 kubelet[2748]: I1101 00:42:54.344138 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-19-28" podStartSLOduration=1.344115795 podStartE2EDuration="1.344115795s" podCreationTimestamp="2025-11-01 00:42:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:42:54.328107836 +0000 UTC m=+1.373880519" watchObservedRunningTime="2025-11-01 00:42:54.344115795 +0000 UTC m=+1.389888468" Nov 1 00:42:54.363627 kubelet[2748]: I1101 00:42:54.363082 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-19-28" podStartSLOduration=1.3630488920000001 podStartE2EDuration="1.363048892s" podCreationTimestamp="2025-11-01 00:42:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:42:54.345008163 +0000 UTC m=+1.390780847" watchObservedRunningTime="2025-11-01 00:42:54.363048892 +0000 UTC m=+1.408821576" Nov 1 00:42:54.385825 kubelet[2748]: I1101 00:42:54.383168 2748 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-19-28" Nov 1 00:42:54.385825 kubelet[2748]: I1101 00:42:54.383547 2748 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-19-28" Nov 1 00:42:54.393697 kubelet[2748]: E1101 00:42:54.393645 2748 
kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-19-28\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-19-28" Nov 1 00:42:54.398630 kubelet[2748]: E1101 00:42:54.398592 2748 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-19-28\" already exists" pod="kube-system/kube-apiserver-ip-172-31-19-28" Nov 1 00:42:56.471324 kubelet[2748]: I1101 00:42:56.471286 2748 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:42:56.472170 env[1822]: time="2025-11-01T00:42:56.472110951Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 00:42:56.472611 kubelet[2748]: I1101 00:42:56.472436 2748 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:42:57.147939 kubelet[2748]: I1101 00:42:57.147890 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c9aef23-67a4-40f3-beb7-4ff8d963fb87-lib-modules\") pod \"kube-proxy-r42v5\" (UID: \"3c9aef23-67a4-40f3-beb7-4ff8d963fb87\") " pod="kube-system/kube-proxy-r42v5" Nov 1 00:42:57.148204 kubelet[2748]: I1101 00:42:57.148155 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3c9aef23-67a4-40f3-beb7-4ff8d963fb87-kube-proxy\") pod \"kube-proxy-r42v5\" (UID: \"3c9aef23-67a4-40f3-beb7-4ff8d963fb87\") " pod="kube-system/kube-proxy-r42v5" Nov 1 00:42:57.148313 kubelet[2748]: I1101 00:42:57.148302 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c9aef23-67a4-40f3-beb7-4ff8d963fb87-xtables-lock\") pod \"kube-proxy-r42v5\" (UID: \"3c9aef23-67a4-40f3-beb7-4ff8d963fb87\") " 
pod="kube-system/kube-proxy-r42v5" Nov 1 00:42:57.148436 kubelet[2748]: I1101 00:42:57.148416 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx5vk\" (UniqueName: \"kubernetes.io/projected/3c9aef23-67a4-40f3-beb7-4ff8d963fb87-kube-api-access-zx5vk\") pod \"kube-proxy-r42v5\" (UID: \"3c9aef23-67a4-40f3-beb7-4ff8d963fb87\") " pod="kube-system/kube-proxy-r42v5" Nov 1 00:42:57.255067 kubelet[2748]: E1101 00:42:57.255030 2748 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 1 00:42:57.255067 kubelet[2748]: E1101 00:42:57.255062 2748 projected.go:194] Error preparing data for projected volume kube-api-access-zx5vk for pod kube-system/kube-proxy-r42v5: configmap "kube-root-ca.crt" not found Nov 1 00:42:57.255241 kubelet[2748]: E1101 00:42:57.255120 2748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3c9aef23-67a4-40f3-beb7-4ff8d963fb87-kube-api-access-zx5vk podName:3c9aef23-67a4-40f3-beb7-4ff8d963fb87 nodeName:}" failed. No retries permitted until 2025-11-01 00:42:57.755102178 +0000 UTC m=+4.800874838 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zx5vk" (UniqueName: "kubernetes.io/projected/3c9aef23-67a4-40f3-beb7-4ff8d963fb87-kube-api-access-zx5vk") pod "kube-proxy-r42v5" (UID: "3c9aef23-67a4-40f3-beb7-4ff8d963fb87") : configmap "kube-root-ca.crt" not found Nov 1 00:42:57.551722 kubelet[2748]: I1101 00:42:57.551663 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1f4c20d6-8a41-43e6-9034-640fb1ea2a85-var-lib-calico\") pod \"tigera-operator-7dcd859c48-g5vjn\" (UID: \"1f4c20d6-8a41-43e6-9034-640fb1ea2a85\") " pod="tigera-operator/tigera-operator-7dcd859c48-g5vjn" Nov 1 00:42:57.552500 kubelet[2748]: I1101 00:42:57.551771 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkr7n\" (UniqueName: \"kubernetes.io/projected/1f4c20d6-8a41-43e6-9034-640fb1ea2a85-kube-api-access-jkr7n\") pod \"tigera-operator-7dcd859c48-g5vjn\" (UID: \"1f4c20d6-8a41-43e6-9034-640fb1ea2a85\") " pod="tigera-operator/tigera-operator-7dcd859c48-g5vjn" Nov 1 00:42:57.658914 kubelet[2748]: I1101 00:42:57.658875 2748 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 1 00:42:57.820754 env[1822]: time="2025-11-01T00:42:57.820371045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-g5vjn,Uid:1f4c20d6-8a41-43e6-9034-640fb1ea2a85,Namespace:tigera-operator,Attempt:0,}" Nov 1 00:42:57.852597 env[1822]: time="2025-11-01T00:42:57.852505384Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:42:57.852597 env[1822]: time="2025-11-01T00:42:57.852558717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:42:57.852922 env[1822]: time="2025-11-01T00:42:57.852574392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:42:57.852922 env[1822]: time="2025-11-01T00:42:57.852802658Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/702d13ff3fda0ec2f8fde99e6c07478e012c2b011e358dc7f0de1609c9a6184a pid=2799 runtime=io.containerd.runc.v2 Nov 1 00:42:57.947464 env[1822]: time="2025-11-01T00:42:57.947419638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-g5vjn,Uid:1f4c20d6-8a41-43e6-9034-640fb1ea2a85,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"702d13ff3fda0ec2f8fde99e6c07478e012c2b011e358dc7f0de1609c9a6184a\"" Nov 1 00:42:57.949915 env[1822]: time="2025-11-01T00:42:57.949881122Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 1 00:42:57.999997 env[1822]: time="2025-11-01T00:42:57.999874149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r42v5,Uid:3c9aef23-67a4-40f3-beb7-4ff8d963fb87,Namespace:kube-system,Attempt:0,}" Nov 1 00:42:58.040028 env[1822]: time="2025-11-01T00:42:58.039925371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:42:58.040221 env[1822]: time="2025-11-01T00:42:58.040041578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:42:58.040221 env[1822]: time="2025-11-01T00:42:58.040073596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:42:58.040456 env[1822]: time="2025-11-01T00:42:58.040378680Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/056aac19c9971835c34e1c2baa415aed2818971877f17a84702b4b13910a362b pid=2840 runtime=io.containerd.runc.v2 Nov 1 00:42:58.087480 env[1822]: time="2025-11-01T00:42:58.086815236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r42v5,Uid:3c9aef23-67a4-40f3-beb7-4ff8d963fb87,Namespace:kube-system,Attempt:0,} returns sandbox id \"056aac19c9971835c34e1c2baa415aed2818971877f17a84702b4b13910a362b\"" Nov 1 00:42:58.090898 env[1822]: time="2025-11-01T00:42:58.090849214Z" level=info msg="CreateContainer within sandbox \"056aac19c9971835c34e1c2baa415aed2818971877f17a84702b4b13910a362b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 00:42:58.119731 env[1822]: time="2025-11-01T00:42:58.119670218Z" level=info msg="CreateContainer within sandbox \"056aac19c9971835c34e1c2baa415aed2818971877f17a84702b4b13910a362b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"11446542edfae5adce21cec60ac290c1b72f4cccb4a75c1c43aa096528c3cde6\"" Nov 1 00:42:58.122116 env[1822]: time="2025-11-01T00:42:58.120593657Z" level=info msg="StartContainer for \"11446542edfae5adce21cec60ac290c1b72f4cccb4a75c1c43aa096528c3cde6\"" Nov 1 00:42:58.183641 env[1822]: time="2025-11-01T00:42:58.183585007Z" level=info msg="StartContainer for \"11446542edfae5adce21cec60ac290c1b72f4cccb4a75c1c43aa096528c3cde6\" returns successfully" Nov 1 00:42:58.674514 systemd[1]: run-containerd-runc-k8s.io-702d13ff3fda0ec2f8fde99e6c07478e012c2b011e358dc7f0de1609c9a6184a-runc.DAMj3g.mount: Deactivated successfully. Nov 1 00:42:59.425532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount634954573.mount: Deactivated successfully. 
Nov 1 00:42:59.684486 kernel: kauditd_printk_skb: 5 callbacks suppressed Nov 1 00:42:59.684651 kernel: audit: type=1325 audit(1761957779.675:226): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2941 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:59.675000 audit[2941]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2941 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:59.706312 kernel: audit: type=1300 audit(1761957779.675:226): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffff6513970 a2=0 a3=7ffff651395c items=0 ppid=2891 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.675000 audit[2941]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffff6513970 a2=0 a3=7ffff651395c items=0 ppid=2891 pid=2941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.675000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 00:42:59.713463 kernel: audit: type=1327 audit(1761957779.675:226): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 00:42:59.677000 audit[2942]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2942 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:42:59.720489 kernel: audit: type=1325 audit(1761957779.677:227): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2942 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:42:59.677000 audit[2942]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffebb732020 a2=0 
a3=7ffebb73200c items=0 ppid=2891 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.732374 kernel: audit: type=1300 audit(1761957779.677:227): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffebb732020 a2=0 a3=7ffebb73200c items=0 ppid=2891 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.732536 kernel: audit: type=1327 audit(1761957779.677:227): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 00:42:59.677000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 00:42:59.679000 audit[2943]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=2943 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:59.743933 kernel: audit: type=1325 audit(1761957779.679:228): table=nat:40 family=2 entries=1 op=nft_register_chain pid=2943 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:59.679000 audit[2943]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd9b8057d0 a2=0 a3=7ffd9b8057bc items=0 ppid=2891 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.755849 kernel: audit: type=1300 audit(1761957779.679:228): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd9b8057d0 a2=0 a3=7ffd9b8057bc items=0 ppid=2891 pid=2943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.679000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Nov 1 00:42:59.761796 kernel: audit: type=1327 audit(1761957779.679:228): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Nov 1 00:42:59.681000 audit[2944]: NETFILTER_CFG table=nat:41 family=10 entries=1 op=nft_register_chain pid=2944 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:42:59.770375 kernel: audit: type=1325 audit(1761957779.681:229): table=nat:41 family=10 entries=1 op=nft_register_chain pid=2944 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:42:59.681000 audit[2944]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe42b9a380 a2=0 a3=7ffe42b9a36c items=0 ppid=2891 pid=2944 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.681000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Nov 1 00:42:59.682000 audit[2945]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_chain pid=2945 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:59.682000 audit[2945]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd5fb33e90 a2=0 a3=7ffd5fb33e7c items=0 ppid=2891 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.682000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Nov 1 00:42:59.685000 audit[2946]: 
NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2946 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:42:59.685000 audit[2946]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe4882ad70 a2=0 a3=7ffe4882ad5c items=0 ppid=2891 pid=2946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.685000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Nov 1 00:42:59.791000 audit[2947]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2947 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:59.791000 audit[2947]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc00f58550 a2=0 a3=7ffc00f5853c items=0 ppid=2891 pid=2947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.791000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Nov 1 00:42:59.797000 audit[2949]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2949 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:59.797000 audit[2949]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc326a5020 a2=0 a3=7ffc326a500c items=0 ppid=2891 pid=2949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.797000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Nov 1 00:42:59.804000 audit[2952]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2952 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:59.804000 audit[2952]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdf8e7a760 a2=0 a3=7ffdf8e7a74c items=0 ppid=2891 pid=2952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.804000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Nov 1 00:42:59.806000 audit[2953]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2953 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:59.806000 audit[2953]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf4339d00 a2=0 a3=7ffdf4339cec items=0 ppid=2891 pid=2953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.806000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Nov 1 00:42:59.812000 audit[2955]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2955 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:59.812000 audit[2955]: SYSCALL arch=c000003e syscall=46 
success=yes exit=528 a0=3 a1=7ffdffe9df50 a2=0 a3=7ffdffe9df3c items=0 ppid=2891 pid=2955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.812000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Nov 1 00:42:59.814000 audit[2956]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2956 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:59.814000 audit[2956]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd82f6bde0 a2=0 a3=7ffd82f6bdcc items=0 ppid=2891 pid=2956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.814000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Nov 1 00:42:59.819000 audit[2958]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2958 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:59.819000 audit[2958]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd185188c0 a2=0 a3=7ffd185188ac items=0 ppid=2891 pid=2958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.819000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Nov 1 00:42:59.825000 audit[2961]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2961 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:59.825000 audit[2961]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdd1e33020 a2=0 a3=7ffdd1e3300c items=0 ppid=2891 pid=2961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.825000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Nov 1 00:42:59.827000 audit[2962]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2962 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:59.827000 audit[2962]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe8f691300 a2=0 a3=7ffe8f6912ec items=0 ppid=2891 pid=2962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.827000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Nov 1 00:42:59.832000 audit[2964]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2964 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:59.832000 audit[2964]: SYSCALL arch=c000003e syscall=46 
success=yes exit=528 a0=3 a1=7ffca3bde2a0 a2=0 a3=7ffca3bde28c items=0 ppid=2891 pid=2964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.832000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Nov 1 00:42:59.834000 audit[2965]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2965 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:59.834000 audit[2965]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffd84f1d50 a2=0 a3=7fffd84f1d3c items=0 ppid=2891 pid=2965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.834000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Nov 1 00:42:59.838000 audit[2967]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2967 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:59.838000 audit[2967]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe2f32eb40 a2=0 a3=7ffe2f32eb2c items=0 ppid=2891 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.838000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Nov 1 00:42:59.844000 audit[2970]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2970 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:59.844000 audit[2970]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffde7f20490 a2=0 a3=7ffde7f2047c items=0 ppid=2891 pid=2970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.844000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Nov 1 00:42:59.850000 audit[2973]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2973 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:59.850000 audit[2973]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff4c6e6fd0 a2=0 a3=7fff4c6e6fbc items=0 ppid=2891 pid=2973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.850000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Nov 1 00:42:59.852000 audit[2974]: NETFILTER_CFG table=nat:58 family=2 entries=1 
op=nft_register_chain pid=2974 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:59.852000 audit[2974]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffca5253c80 a2=0 a3=7ffca5253c6c items=0 ppid=2891 pid=2974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.852000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Nov 1 00:42:59.857000 audit[2976]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2976 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:59.857000 audit[2976]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe47ca9340 a2=0 a3=7ffe47ca932c items=0 ppid=2891 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.857000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 00:42:59.863000 audit[2979]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2979 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:59.863000 audit[2979]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe53379a20 a2=0 a3=7ffe53379a0c items=0 ppid=2891 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.863000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 00:42:59.865000 audit[2980]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2980 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:59.865000 audit[2980]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd12ff1540 a2=0 a3=7ffd12ff152c items=0 ppid=2891 pid=2980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.865000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Nov 1 00:42:59.870000 audit[2982]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2982 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 00:42:59.870000 audit[2982]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffea0003d20 a2=0 a3=7ffea0003d0c items=0 ppid=2891 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.870000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Nov 1 00:42:59.921000 audit[2988]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2988 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:42:59.921000 audit[2988]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd874cb3d0 a2=0 a3=7ffd874cb3bc items=0 
ppid=2891 pid=2988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.921000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:42:59.935000 audit[2988]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2988 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:42:59.935000 audit[2988]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd874cb3d0 a2=0 a3=7ffd874cb3bc items=0 ppid=2891 pid=2988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.935000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:42:59.941000 audit[2993]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2993 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:42:59.941000 audit[2993]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd90a954c0 a2=0 a3=7ffd90a954ac items=0 ppid=2891 pid=2993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.941000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Nov 1 00:42:59.945000 audit[2995]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2995 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:42:59.945000 audit[2995]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=836 a0=3 a1=7fff29685f20 a2=0 a3=7fff29685f0c items=0 ppid=2891 pid=2995 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.945000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Nov 1 00:42:59.952000 audit[2998]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2998 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:42:59.952000 audit[2998]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffee74b29a0 a2=0 a3=7ffee74b298c items=0 ppid=2891 pid=2998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.952000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Nov 1 00:42:59.955000 audit[2999]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2999 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:42:59.955000 audit[2999]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff84e18d70 a2=0 a3=7fff84e18d5c items=0 ppid=2891 pid=2999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.955000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Nov 1 00:42:59.959000 audit[3001]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=3001 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:42:59.959000 audit[3001]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcf114a5e0 a2=0 a3=7ffcf114a5cc items=0 ppid=2891 pid=3001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.959000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Nov 1 00:42:59.961000 audit[3002]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=3002 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:42:59.961000 audit[3002]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd8f20c00 a2=0 a3=7fffd8f20bec items=0 ppid=2891 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.961000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Nov 1 00:42:59.965000 audit[3004]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=3004 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:42:59.965000 audit[3004]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcdcad7d50 a2=0 a3=7ffcdcad7d3c items=0 ppid=2891 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.965000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Nov 1 00:42:59.972000 audit[3007]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=3007 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:42:59.972000 audit[3007]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fffc37fde40 a2=0 a3=7fffc37fde2c items=0 ppid=2891 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.972000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Nov 1 00:42:59.974000 audit[3008]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=3008 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:42:59.974000 audit[3008]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe4cd397e0 a2=0 a3=7ffe4cd397cc items=0 ppid=2891 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.974000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Nov 1 00:42:59.981000 audit[3010]: NETFILTER_CFG table=filter:74 
family=10 entries=1 op=nft_register_rule pid=3010 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:42:59.981000 audit[3010]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdc75adec0 a2=0 a3=7ffdc75adeac items=0 ppid=2891 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.981000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Nov 1 00:42:59.984000 audit[3011]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=3011 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:42:59.984000 audit[3011]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd659e20c0 a2=0 a3=7ffd659e20ac items=0 ppid=2891 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.984000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Nov 1 00:42:59.988000 audit[3013]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=3013 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:42:59.988000 audit[3013]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffffbcc7b90 a2=0 a3=7ffffbcc7b7c items=0 ppid=2891 pid=3013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.988000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Nov 1 00:42:59.994000 audit[3016]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=3016 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:42:59.994000 audit[3016]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcadf291a0 a2=0 a3=7ffcadf2918c items=0 ppid=2891 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:42:59.994000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Nov 1 00:43:00.002000 audit[3019]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=3019 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:00.002000 audit[3019]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff12d71ce0 a2=0 a3=7fff12d71ccc items=0 ppid=2891 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:00.002000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Nov 1 00:43:00.004000 audit[3020]: NETFILTER_CFG table=nat:79 family=10 entries=1 
op=nft_register_chain pid=3020 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:00.004000 audit[3020]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe2b781db0 a2=0 a3=7ffe2b781d9c items=0 ppid=2891 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:00.004000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Nov 1 00:43:00.009000 audit[3022]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=3022 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:00.009000 audit[3022]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffe8f501990 a2=0 a3=7ffe8f50197c items=0 ppid=2891 pid=3022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:00.009000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 00:43:00.018000 audit[3025]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=3025 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:00.018000 audit[3025]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffe600d2670 a2=0 a3=7ffe600d265c items=0 ppid=2891 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:00.018000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 00:43:00.021000 audit[3026]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=3026 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:00.021000 audit[3026]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd69155930 a2=0 a3=7ffd6915591c items=0 ppid=2891 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:00.021000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Nov 1 00:43:00.025000 audit[3028]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=3028 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:00.025000 audit[3028]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fffbd176df0 a2=0 a3=7fffbd176ddc items=0 ppid=2891 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:00.025000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Nov 1 00:43:00.027000 audit[3029]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3029 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:00.027000 audit[3029]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc587f9e70 a2=0 a3=7ffc587f9e5c 
items=0 ppid=2891 pid=3029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:00.027000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Nov 1 00:43:00.031000 audit[3031]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3031 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:00.031000 audit[3031]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffea4db3a40 a2=0 a3=7ffea4db3a2c items=0 ppid=2891 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:00.031000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 00:43:00.037000 audit[3034]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=3034 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 00:43:00.037000 audit[3034]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd2d12a970 a2=0 a3=7ffd2d12a95c items=0 ppid=2891 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:00.037000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 00:43:00.053000 audit[3036]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=3036 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Nov 1 00:43:00.053000 audit[3036]: SYSCALL arch=c000003e syscall=46 
success=yes exit=2088 a0=3 a1=7fff69099070 a2=0 a3=7fff6909905c items=0 ppid=2891 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:00.053000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:00.054000 audit[3036]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=3036 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Nov 1 00:43:00.054000 audit[3036]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fff69099070 a2=0 a3=7fff6909905c items=0 ppid=2891 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:00.054000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:00.156008 kubelet[2748]: I1101 00:43:00.155937 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r42v5" podStartSLOduration=3.155914265 podStartE2EDuration="3.155914265s" podCreationTimestamp="2025-11-01 00:42:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:42:58.405608844 +0000 UTC m=+5.451381527" watchObservedRunningTime="2025-11-01 00:43:00.155914265 +0000 UTC m=+7.201686949" Nov 1 00:43:00.575604 env[1822]: time="2025-11-01T00:43:00.575541469Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:00.581115 env[1822]: time="2025-11-01T00:43:00.580717578Z" 
level=info msg="ImageCreate event &ImageCreate{Name:sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:00.583897 env[1822]: time="2025-11-01T00:43:00.583849480Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:00.586948 env[1822]: time="2025-11-01T00:43:00.586902280Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:00.587588 env[1822]: time="2025-11-01T00:43:00.587547836Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 1 00:43:00.592404 env[1822]: time="2025-11-01T00:43:00.592364211Z" level=info msg="CreateContainer within sandbox \"702d13ff3fda0ec2f8fde99e6c07478e012c2b011e358dc7f0de1609c9a6184a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 1 00:43:00.614288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2073585400.mount: Deactivated successfully. Nov 1 00:43:00.625415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2093923816.mount: Deactivated successfully. 
Nov 1 00:43:00.634765 env[1822]: time="2025-11-01T00:43:00.634686160Z" level=info msg="CreateContainer within sandbox \"702d13ff3fda0ec2f8fde99e6c07478e012c2b011e358dc7f0de1609c9a6184a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"957fccbf7989e7e241a2f7e298f3ced59afae2b711abe6d0ff09433f5c8280c2\"" Nov 1 00:43:00.636448 env[1822]: time="2025-11-01T00:43:00.635500829Z" level=info msg="StartContainer for \"957fccbf7989e7e241a2f7e298f3ced59afae2b711abe6d0ff09433f5c8280c2\"" Nov 1 00:43:00.703365 env[1822]: time="2025-11-01T00:43:00.698778732Z" level=info msg="StartContainer for \"957fccbf7989e7e241a2f7e298f3ced59afae2b711abe6d0ff09433f5c8280c2\" returns successfully" Nov 1 00:43:02.360295 kubelet[2748]: I1101 00:43:02.360208 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-g5vjn" podStartSLOduration=2.719007623 podStartE2EDuration="5.360171278s" podCreationTimestamp="2025-11-01 00:42:57 +0000 UTC" firstStartedPulling="2025-11-01 00:42:57.949234211 +0000 UTC m=+4.995006871" lastFinishedPulling="2025-11-01 00:43:00.590397852 +0000 UTC m=+7.636170526" observedRunningTime="2025-11-01 00:43:01.432814303 +0000 UTC m=+8.478586985" watchObservedRunningTime="2025-11-01 00:43:02.360171278 +0000 UTC m=+9.405943961" Nov 1 00:43:03.956669 update_engine[1804]: I1101 00:43:03.956138 1804 update_attempter.cc:509] Updating boot flags... Nov 1 00:43:09.375299 sudo[2118]: pam_unix(sudo:session): session closed for user root Nov 1 00:43:09.386396 kernel: kauditd_printk_skb: 143 callbacks suppressed Nov 1 00:43:09.386545 kernel: audit: type=1106 audit(1761957789.374:277): pid=2118 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Nov 1 00:43:09.374000 audit[2118]: USER_END pid=2118 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:43:09.374000 audit[2118]: CRED_DISP pid=2118 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:43:09.396556 kernel: audit: type=1104 audit(1761957789.374:278): pid=2118 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 00:43:09.417775 sshd[2114]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:09.437712 kernel: audit: type=1106 audit(1761957789.423:279): pid=2114 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:09.423000 audit[2114]: USER_END pid=2114 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:09.444116 systemd[1]: sshd@6-172.31.19.28:22-147.75.109.163:58138.service: Deactivated successfully. Nov 1 00:43:09.445732 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 00:43:09.446826 systemd-logind[1803]: Session 7 logged out. Waiting for processes to exit. Nov 1 00:43:09.448621 systemd-logind[1803]: Removed session 7. 
Nov 1 00:43:09.423000 audit[2114]: CRED_DISP pid=2114 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:09.467380 kernel: audit: type=1104 audit(1761957789.423:280): pid=2114 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:09.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.19.28:22-147.75.109.163:58138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:09.476503 kernel: audit: type=1131 audit(1761957789.443:281): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.19.28:22-147.75.109.163:58138 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:43:10.473000 audit[3218]: NETFILTER_CFG table=filter:89 family=2 entries=14 op=nft_register_rule pid=3218 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:10.493041 kernel: audit: type=1325 audit(1761957790.473:282): table=filter:89 family=2 entries=14 op=nft_register_rule pid=3218 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:10.493190 kernel: audit: type=1300 audit(1761957790.473:282): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe4b5d31a0 a2=0 a3=7ffe4b5d318c items=0 ppid=2891 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:10.473000 audit[3218]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe4b5d31a0 a2=0 a3=7ffe4b5d318c items=0 ppid=2891 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:10.473000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:10.514383 kernel: audit: type=1327 audit(1761957790.473:282): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:10.498000 audit[3218]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=3218 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:10.525366 kernel: audit: type=1325 audit(1761957790.498:283): table=nat:90 family=2 entries=12 op=nft_register_rule pid=3218 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:10.498000 audit[3218]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe4b5d31a0 a2=0 a3=0 items=0 ppid=2891 pid=3218 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:10.546370 kernel: audit: type=1300 audit(1761957790.498:283): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe4b5d31a0 a2=0 a3=0 items=0 ppid=2891 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:10.498000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:10.518000 audit[3220]: NETFILTER_CFG table=filter:91 family=2 entries=15 op=nft_register_rule pid=3220 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:10.518000 audit[3220]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe66cf8be0 a2=0 a3=7ffe66cf8bcc items=0 ppid=2891 pid=3220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:10.518000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:10.533000 audit[3220]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=3220 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:10.533000 audit[3220]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe66cf8be0 a2=0 a3=0 items=0 ppid=2891 pid=3220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:10.533000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:13.791000 audit[3222]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=3222 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:13.791000 audit[3222]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe599131f0 a2=0 a3=7ffe599131dc items=0 ppid=2891 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:13.791000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:13.797000 audit[3222]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=3222 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:13.797000 audit[3222]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe599131f0 a2=0 a3=0 items=0 ppid=2891 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:13.797000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:13.818000 audit[3224]: NETFILTER_CFG table=filter:95 family=2 entries=18 op=nft_register_rule pid=3224 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:13.818000 audit[3224]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffca8479290 a2=0 a3=7ffca847927c items=0 ppid=2891 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Nov 1 00:43:13.818000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:13.824000 audit[3224]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=3224 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:13.824000 audit[3224]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffca8479290 a2=0 a3=0 items=0 ppid=2891 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:13.824000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:14.870515 kernel: kauditd_printk_skb: 19 callbacks suppressed Nov 1 00:43:14.870701 kernel: audit: type=1325 audit(1761957794.860:290): table=filter:97 family=2 entries=19 op=nft_register_rule pid=3226 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:14.860000 audit[3226]: NETFILTER_CFG table=filter:97 family=2 entries=19 op=nft_register_rule pid=3226 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:14.860000 audit[3226]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd284a9b70 a2=0 a3=7ffd284a9b5c items=0 ppid=2891 pid=3226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:14.860000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:14.888817 kernel: audit: type=1300 audit(1761957794.860:290): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd284a9b70 a2=0 a3=7ffd284a9b5c items=0 ppid=2891 pid=3226 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:14.888957 kernel: audit: type=1327 audit(1761957794.860:290): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:14.888000 audit[3226]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=3226 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:14.894374 kernel: audit: type=1325 audit(1761957794.888:291): table=nat:98 family=2 entries=12 op=nft_register_rule pid=3226 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:14.888000 audit[3226]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd284a9b70 a2=0 a3=0 items=0 ppid=2891 pid=3226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:14.888000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:14.918679 kernel: audit: type=1300 audit(1761957794.888:291): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd284a9b70 a2=0 a3=0 items=0 ppid=2891 pid=3226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:14.918828 kernel: audit: type=1327 audit(1761957794.888:291): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:16.043000 audit[3228]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=3228 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" 
Nov 1 00:43:16.050368 kernel: audit: type=1325 audit(1761957796.043:292): table=filter:99 family=2 entries=21 op=nft_register_rule pid=3228 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:16.043000 audit[3228]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff3ecfa420 a2=0 a3=7fff3ecfa40c items=0 ppid=2891 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:16.061370 kernel: audit: type=1300 audit(1761957796.043:292): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff3ecfa420 a2=0 a3=7fff3ecfa40c items=0 ppid=2891 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:16.043000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:16.072000 audit[3228]: NETFILTER_CFG table=nat:100 family=2 entries=12 op=nft_register_rule pid=3228 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:16.080870 kernel: audit: type=1327 audit(1761957796.043:292): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:16.080997 kernel: audit: type=1325 audit(1761957796.072:293): table=nat:100 family=2 entries=12 op=nft_register_rule pid=3228 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:16.072000 audit[3228]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff3ecfa420 a2=0 a3=0 items=0 ppid=2891 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Nov 1 00:43:16.072000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:16.235393 kubelet[2748]: I1101 00:43:16.235313 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cb6a2793-3072-4fa6-ba5b-f794f0be6d3b-typha-certs\") pod \"calico-typha-5cdf694c78-whl26\" (UID: \"cb6a2793-3072-4fa6-ba5b-f794f0be6d3b\") " pod="calico-system/calico-typha-5cdf694c78-whl26" Nov 1 00:43:16.235393 kubelet[2748]: I1101 00:43:16.235377 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb6a2793-3072-4fa6-ba5b-f794f0be6d3b-tigera-ca-bundle\") pod \"calico-typha-5cdf694c78-whl26\" (UID: \"cb6a2793-3072-4fa6-ba5b-f794f0be6d3b\") " pod="calico-system/calico-typha-5cdf694c78-whl26" Nov 1 00:43:16.235393 kubelet[2748]: I1101 00:43:16.235401 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qr4k\" (UniqueName: \"kubernetes.io/projected/cb6a2793-3072-4fa6-ba5b-f794f0be6d3b-kube-api-access-5qr4k\") pod \"calico-typha-5cdf694c78-whl26\" (UID: \"cb6a2793-3072-4fa6-ba5b-f794f0be6d3b\") " pod="calico-system/calico-typha-5cdf694c78-whl26" Nov 1 00:43:16.336568 kubelet[2748]: I1101 00:43:16.336524 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0397250d-626a-441b-9239-b5ce25052fff-cni-log-dir\") pod \"calico-node-8q4b9\" (UID: \"0397250d-626a-441b-9239-b5ce25052fff\") " pod="calico-system/calico-node-8q4b9" Nov 1 00:43:16.336772 kubelet[2748]: I1101 00:43:16.336613 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: 
\"kubernetes.io/secret/0397250d-626a-441b-9239-b5ce25052fff-node-certs\") pod \"calico-node-8q4b9\" (UID: \"0397250d-626a-441b-9239-b5ce25052fff\") " pod="calico-system/calico-node-8q4b9" Nov 1 00:43:16.336772 kubelet[2748]: I1101 00:43:16.336675 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0397250d-626a-441b-9239-b5ce25052fff-xtables-lock\") pod \"calico-node-8q4b9\" (UID: \"0397250d-626a-441b-9239-b5ce25052fff\") " pod="calico-system/calico-node-8q4b9" Nov 1 00:43:16.336772 kubelet[2748]: I1101 00:43:16.336717 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0397250d-626a-441b-9239-b5ce25052fff-var-run-calico\") pod \"calico-node-8q4b9\" (UID: \"0397250d-626a-441b-9239-b5ce25052fff\") " pod="calico-system/calico-node-8q4b9" Nov 1 00:43:16.336772 kubelet[2748]: I1101 00:43:16.336756 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0397250d-626a-441b-9239-b5ce25052fff-cni-net-dir\") pod \"calico-node-8q4b9\" (UID: \"0397250d-626a-441b-9239-b5ce25052fff\") " pod="calico-system/calico-node-8q4b9" Nov 1 00:43:16.336974 kubelet[2748]: I1101 00:43:16.336780 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfblv\" (UniqueName: \"kubernetes.io/projected/0397250d-626a-441b-9239-b5ce25052fff-kube-api-access-rfblv\") pod \"calico-node-8q4b9\" (UID: \"0397250d-626a-441b-9239-b5ce25052fff\") " pod="calico-system/calico-node-8q4b9" Nov 1 00:43:16.336974 kubelet[2748]: I1101 00:43:16.336812 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/0397250d-626a-441b-9239-b5ce25052fff-flexvol-driver-host\") pod \"calico-node-8q4b9\" (UID: \"0397250d-626a-441b-9239-b5ce25052fff\") " pod="calico-system/calico-node-8q4b9" Nov 1 00:43:16.336974 kubelet[2748]: I1101 00:43:16.336835 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0397250d-626a-441b-9239-b5ce25052fff-policysync\") pod \"calico-node-8q4b9\" (UID: \"0397250d-626a-441b-9239-b5ce25052fff\") " pod="calico-system/calico-node-8q4b9" Nov 1 00:43:16.336974 kubelet[2748]: I1101 00:43:16.336890 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0397250d-626a-441b-9239-b5ce25052fff-cni-bin-dir\") pod \"calico-node-8q4b9\" (UID: \"0397250d-626a-441b-9239-b5ce25052fff\") " pod="calico-system/calico-node-8q4b9" Nov 1 00:43:16.336974 kubelet[2748]: I1101 00:43:16.336915 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0397250d-626a-441b-9239-b5ce25052fff-lib-modules\") pod \"calico-node-8q4b9\" (UID: \"0397250d-626a-441b-9239-b5ce25052fff\") " pod="calico-system/calico-node-8q4b9" Nov 1 00:43:16.337193 kubelet[2748]: I1101 00:43:16.336939 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0397250d-626a-441b-9239-b5ce25052fff-tigera-ca-bundle\") pod \"calico-node-8q4b9\" (UID: \"0397250d-626a-441b-9239-b5ce25052fff\") " pod="calico-system/calico-node-8q4b9" Nov 1 00:43:16.337193 kubelet[2748]: I1101 00:43:16.336967 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0397250d-626a-441b-9239-b5ce25052fff-var-lib-calico\") 
pod \"calico-node-8q4b9\" (UID: \"0397250d-626a-441b-9239-b5ce25052fff\") " pod="calico-system/calico-node-8q4b9" Nov 1 00:43:16.407189 env[1822]: time="2025-11-01T00:43:16.407132654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cdf694c78-whl26,Uid:cb6a2793-3072-4fa6-ba5b-f794f0be6d3b,Namespace:calico-system,Attempt:0,}" Nov 1 00:43:16.431130 env[1822]: time="2025-11-01T00:43:16.431033529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:43:16.431130 env[1822]: time="2025-11-01T00:43:16.431078052Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:43:16.431130 env[1822]: time="2025-11-01T00:43:16.431093677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:43:16.431712 env[1822]: time="2025-11-01T00:43:16.431637326Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c742cecf59dc1edf4f24de7cd882563273bc5a00366abca6fe9c6f2e49787bba pid=3237 runtime=io.containerd.runc.v2 Nov 1 00:43:16.469610 kubelet[2748]: E1101 00:43:16.469572 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.469610 kubelet[2748]: W1101 00:43:16.469607 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.470605 kubelet[2748]: E1101 00:43:16.470540 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:16.473383 kubelet[2748]: E1101 00:43:16.472383 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.473383 kubelet[2748]: W1101 00:43:16.472435 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.473383 kubelet[2748]: E1101 00:43:16.472458 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:16.505876 kubelet[2748]: E1101 00:43:16.505827 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5lqpx" podUID="9a6bbdac-9f73-4cc6-aadc-84424d8082ea" Nov 1 00:43:16.508240 kubelet[2748]: I1101 00:43:16.508199 2748 status_manager.go:890] "Failed to get status for pod" podUID="9a6bbdac-9f73-4cc6-aadc-84424d8082ea" pod="calico-system/csi-node-driver-5lqpx" err="pods \"csi-node-driver-5lqpx\" is forbidden: User \"system:node:ip-172-31-19-28\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-19-28' and this object" Nov 1 00:43:16.543528 kubelet[2748]: E1101 00:43:16.543485 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.543528 kubelet[2748]: W1101 00:43:16.543522 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 
00:43:16.543715 kubelet[2748]: E1101 00:43:16.543546 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:16.543775 kubelet[2748]: E1101 00:43:16.543755 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.543775 kubelet[2748]: W1101 00:43:16.543772 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.543847 kubelet[2748]: E1101 00:43:16.543782 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:16.543956 kubelet[2748]: E1101 00:43:16.543943 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.543956 kubelet[2748]: W1101 00:43:16.543953 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.544027 kubelet[2748]: E1101 00:43:16.543962 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:16.544288 kubelet[2748]: E1101 00:43:16.544259 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.544288 kubelet[2748]: W1101 00:43:16.544281 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.544479 kubelet[2748]: E1101 00:43:16.544300 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:16.544523 kubelet[2748]: E1101 00:43:16.544515 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.544558 kubelet[2748]: W1101 00:43:16.544523 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.544558 kubelet[2748]: E1101 00:43:16.544532 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:16.544694 kubelet[2748]: E1101 00:43:16.544681 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.544694 kubelet[2748]: W1101 00:43:16.544692 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.544777 kubelet[2748]: E1101 00:43:16.544699 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:16.544856 kubelet[2748]: E1101 00:43:16.544846 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.544902 kubelet[2748]: W1101 00:43:16.544857 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.544902 kubelet[2748]: E1101 00:43:16.544865 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:16.545014 kubelet[2748]: E1101 00:43:16.545002 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.545014 kubelet[2748]: W1101 00:43:16.545014 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.545100 kubelet[2748]: E1101 00:43:16.545021 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:16.545179 kubelet[2748]: E1101 00:43:16.545168 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.545179 kubelet[2748]: W1101 00:43:16.545178 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.545252 kubelet[2748]: E1101 00:43:16.545186 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:16.545323 kubelet[2748]: E1101 00:43:16.545313 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.545323 kubelet[2748]: W1101 00:43:16.545323 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.545422 kubelet[2748]: E1101 00:43:16.545330 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:16.545524 kubelet[2748]: E1101 00:43:16.545473 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.545524 kubelet[2748]: W1101 00:43:16.545481 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.545524 kubelet[2748]: E1101 00:43:16.545487 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:16.545622 kubelet[2748]: E1101 00:43:16.545618 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.545649 kubelet[2748]: W1101 00:43:16.545625 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.545649 kubelet[2748]: E1101 00:43:16.545631 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:16.545775 kubelet[2748]: E1101 00:43:16.545764 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.545775 kubelet[2748]: W1101 00:43:16.545774 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.545847 kubelet[2748]: E1101 00:43:16.545781 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:16.545921 kubelet[2748]: E1101 00:43:16.545909 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.545921 kubelet[2748]: W1101 00:43:16.545920 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.546004 kubelet[2748]: E1101 00:43:16.545926 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:16.546099 kubelet[2748]: E1101 00:43:16.546048 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.546099 kubelet[2748]: W1101 00:43:16.546055 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.546099 kubelet[2748]: E1101 00:43:16.546062 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:16.546206 kubelet[2748]: E1101 00:43:16.546194 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.546206 kubelet[2748]: W1101 00:43:16.546199 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.546262 kubelet[2748]: E1101 00:43:16.546207 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:16.546372 kubelet[2748]: E1101 00:43:16.546360 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.546372 kubelet[2748]: W1101 00:43:16.546370 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.546449 kubelet[2748]: E1101 00:43:16.546378 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:16.546564 kubelet[2748]: E1101 00:43:16.546506 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.546564 kubelet[2748]: W1101 00:43:16.546514 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.546564 kubelet[2748]: E1101 00:43:16.546520 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:16.546871 kubelet[2748]: E1101 00:43:16.546833 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.546871 kubelet[2748]: W1101 00:43:16.546844 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.546871 kubelet[2748]: E1101 00:43:16.546855 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:16.547013 kubelet[2748]: E1101 00:43:16.546990 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.547013 kubelet[2748]: W1101 00:43:16.546997 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.547013 kubelet[2748]: E1101 00:43:16.547005 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:16.591387 env[1822]: time="2025-11-01T00:43:16.587000334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cdf694c78-whl26,Uid:cb6a2793-3072-4fa6-ba5b-f794f0be6d3b,Namespace:calico-system,Attempt:0,} returns sandbox id \"c742cecf59dc1edf4f24de7cd882563273bc5a00366abca6fe9c6f2e49787bba\"" Nov 1 00:43:16.593911 env[1822]: time="2025-11-01T00:43:16.593873919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 1 00:43:16.609264 env[1822]: time="2025-11-01T00:43:16.609227070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8q4b9,Uid:0397250d-626a-441b-9239-b5ce25052fff,Namespace:calico-system,Attempt:0,}" Nov 1 00:43:16.645356 kubelet[2748]: E1101 00:43:16.643871 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.645356 kubelet[2748]: W1101 00:43:16.643898 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.645356 kubelet[2748]: E1101 00:43:16.643923 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume 
plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:16.645356 kubelet[2748]: I1101 00:43:16.643964 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kdwn\" (UniqueName: \"kubernetes.io/projected/9a6bbdac-9f73-4cc6-aadc-84424d8082ea-kube-api-access-4kdwn\") pod \"csi-node-driver-5lqpx\" (UID: \"9a6bbdac-9f73-4cc6-aadc-84424d8082ea\") " pod="calico-system/csi-node-driver-5lqpx" Nov 1 00:43:16.645356 kubelet[2748]: E1101 00:43:16.644278 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.645356 kubelet[2748]: W1101 00:43:16.644292 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.645356 kubelet[2748]: E1101 00:43:16.644312 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:16.645356 kubelet[2748]: I1101 00:43:16.644333 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9a6bbdac-9f73-4cc6-aadc-84424d8082ea-kubelet-dir\") pod \"csi-node-driver-5lqpx\" (UID: \"9a6bbdac-9f73-4cc6-aadc-84424d8082ea\") " pod="calico-system/csi-node-driver-5lqpx" Nov 1 00:43:16.645356 kubelet[2748]: E1101 00:43:16.644641 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.645983 kubelet[2748]: W1101 00:43:16.644651 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.645983 kubelet[2748]: E1101 00:43:16.644669 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:16.645983 kubelet[2748]: E1101 00:43:16.644909 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.645983 kubelet[2748]: W1101 00:43:16.644920 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.645983 kubelet[2748]: E1101 00:43:16.644936 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:16.645983 kubelet[2748]: E1101 00:43:16.645168 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.645983 kubelet[2748]: W1101 00:43:16.645186 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.645983 kubelet[2748]: E1101 00:43:16.645201 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:16.645983 kubelet[2748]: I1101 00:43:16.645226 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9a6bbdac-9f73-4cc6-aadc-84424d8082ea-socket-dir\") pod \"csi-node-driver-5lqpx\" (UID: \"9a6bbdac-9f73-4cc6-aadc-84424d8082ea\") " pod="calico-system/csi-node-driver-5lqpx" Nov 1 00:43:16.647509 kubelet[2748]: E1101 00:43:16.646547 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.647509 kubelet[2748]: W1101 00:43:16.646563 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.647509 kubelet[2748]: E1101 00:43:16.646583 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:16.647509 kubelet[2748]: I1101 00:43:16.646618 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9a6bbdac-9f73-4cc6-aadc-84424d8082ea-registration-dir\") pod \"csi-node-driver-5lqpx\" (UID: \"9a6bbdac-9f73-4cc6-aadc-84424d8082ea\") " pod="calico-system/csi-node-driver-5lqpx" Nov 1 00:43:16.647509 kubelet[2748]: E1101 00:43:16.646893 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.647509 kubelet[2748]: W1101 00:43:16.646905 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.647509 kubelet[2748]: E1101 00:43:16.647004 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:16.647509 kubelet[2748]: I1101 00:43:16.647033 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9a6bbdac-9f73-4cc6-aadc-84424d8082ea-varrun\") pod \"csi-node-driver-5lqpx\" (UID: \"9a6bbdac-9f73-4cc6-aadc-84424d8082ea\") " pod="calico-system/csi-node-driver-5lqpx" Nov 1 00:43:16.647509 kubelet[2748]: E1101 00:43:16.647267 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.647918 kubelet[2748]: W1101 00:43:16.647287 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.648906 kubelet[2748]: E1101 00:43:16.648004 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:16.648906 kubelet[2748]: E1101 00:43:16.648083 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.648906 kubelet[2748]: W1101 00:43:16.648092 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.648906 kubelet[2748]: E1101 00:43:16.648110 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:16.648906 kubelet[2748]: E1101 00:43:16.648692 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.648906 kubelet[2748]: W1101 00:43:16.648704 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.648906 kubelet[2748]: E1101 00:43:16.648723 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:16.649588 kubelet[2748]: E1101 00:43:16.649391 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.649588 kubelet[2748]: W1101 00:43:16.649406 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.649588 kubelet[2748]: E1101 00:43:16.649424 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:16.650018 kubelet[2748]: E1101 00:43:16.649876 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.650018 kubelet[2748]: W1101 00:43:16.649899 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.650018 kubelet[2748]: E1101 00:43:16.649913 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:16.650388 kubelet[2748]: E1101 00:43:16.650369 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.650640 kubelet[2748]: W1101 00:43:16.650480 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.650640 kubelet[2748]: E1101 00:43:16.650500 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:16.650933 kubelet[2748]: E1101 00:43:16.650921 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.651022 kubelet[2748]: W1101 00:43:16.651011 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.651093 kubelet[2748]: E1101 00:43:16.651083 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:16.651527 kubelet[2748]: E1101 00:43:16.651514 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.651705 kubelet[2748]: W1101 00:43:16.651690 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.651924 kubelet[2748]: E1101 00:43:16.651909 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:16.659517 env[1822]: time="2025-11-01T00:43:16.659431078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:43:16.659772 env[1822]: time="2025-11-01T00:43:16.659739498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:43:16.659899 env[1822]: time="2025-11-01T00:43:16.659873285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:43:16.660220 env[1822]: time="2025-11-01T00:43:16.660180918Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/01ab290d7b9394ba0e335b3acbdacc6726e2821a497f1f08af358682750df2a6 pid=3320 runtime=io.containerd.runc.v2 Nov 1 00:43:16.721390 env[1822]: time="2025-11-01T00:43:16.721330698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8q4b9,Uid:0397250d-626a-441b-9239-b5ce25052fff,Namespace:calico-system,Attempt:0,} returns sandbox id \"01ab290d7b9394ba0e335b3acbdacc6726e2821a497f1f08af358682750df2a6\"" Nov 1 00:43:16.748473 kubelet[2748]: E1101 00:43:16.747785 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.748473 kubelet[2748]: W1101 00:43:16.747807 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.748473 kubelet[2748]: E1101 00:43:16.747827 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:16.748473 kubelet[2748]: E1101 00:43:16.748204 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.748473 kubelet[2748]: W1101 00:43:16.748213 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.748473 kubelet[2748]: E1101 00:43:16.748249 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:16.749264 kubelet[2748]: E1101 00:43:16.748549 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.749264 kubelet[2748]: W1101 00:43:16.748558 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.749264 kubelet[2748]: E1101 00:43:16.748572 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:16.749264 kubelet[2748]: E1101 00:43:16.749199 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.749264 kubelet[2748]: W1101 00:43:16.749208 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.749264 kubelet[2748]: E1101 00:43:16.749230 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:16.753988 kubelet[2748]: E1101 00:43:16.749409 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.753988 kubelet[2748]: W1101 00:43:16.749452 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.753988 kubelet[2748]: E1101 00:43:16.749460 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:16.753988 kubelet[2748]: E1101 00:43:16.749647 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.753988 kubelet[2748]: W1101 00:43:16.749654 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.753988 kubelet[2748]: E1101 00:43:16.749662 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:16.753988 kubelet[2748]: E1101 00:43:16.749892 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.753988 kubelet[2748]: W1101 00:43:16.749899 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.753988 kubelet[2748]: E1101 00:43:16.749907 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:16.753988 kubelet[2748]: E1101 00:43:16.750108 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.754406 kubelet[2748]: W1101 00:43:16.750116 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.754406 kubelet[2748]: E1101 00:43:16.750128 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:16.754406 kubelet[2748]: E1101 00:43:16.750315 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.754406 kubelet[2748]: W1101 00:43:16.750322 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.754406 kubelet[2748]: E1101 00:43:16.750330 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:16.754406 kubelet[2748]: E1101 00:43:16.750513 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.754406 kubelet[2748]: W1101 00:43:16.750520 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.754406 kubelet[2748]: E1101 00:43:16.750528 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:16.754406 kubelet[2748]: E1101 00:43:16.750900 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.754406 kubelet[2748]: W1101 00:43:16.750908 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.754695 kubelet[2748]: E1101 00:43:16.750935 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:16.754695 kubelet[2748]: E1101 00:43:16.751364 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.754695 kubelet[2748]: W1101 00:43:16.751381 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.754695 kubelet[2748]: E1101 00:43:16.751392 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:16.754695 kubelet[2748]: E1101 00:43:16.751602 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.754695 kubelet[2748]: W1101 00:43:16.751609 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.754695 kubelet[2748]: E1101 00:43:16.751617 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:16.754695 kubelet[2748]: E1101 00:43:16.751804 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.754695 kubelet[2748]: W1101 00:43:16.751831 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.754695 kubelet[2748]: E1101 00:43:16.751840 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:16.754954 kubelet[2748]: E1101 00:43:16.752002 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.754954 kubelet[2748]: W1101 00:43:16.752009 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.754954 kubelet[2748]: E1101 00:43:16.752017 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:16.754954 kubelet[2748]: E1101 00:43:16.752173 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.754954 kubelet[2748]: W1101 00:43:16.752180 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.754954 kubelet[2748]: E1101 00:43:16.752205 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:16.754954 kubelet[2748]: E1101 00:43:16.752523 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.754954 kubelet[2748]: W1101 00:43:16.752531 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.754954 kubelet[2748]: E1101 00:43:16.752541 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:16.754954 kubelet[2748]: E1101 00:43:16.752715 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.755209 kubelet[2748]: W1101 00:43:16.752722 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.755209 kubelet[2748]: E1101 00:43:16.752743 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:16.755209 kubelet[2748]: E1101 00:43:16.752904 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.755209 kubelet[2748]: W1101 00:43:16.752910 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.755209 kubelet[2748]: E1101 00:43:16.752920 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:16.755209 kubelet[2748]: E1101 00:43:16.753083 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.755209 kubelet[2748]: W1101 00:43:16.753090 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.755209 kubelet[2748]: E1101 00:43:16.753098 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:16.755209 kubelet[2748]: E1101 00:43:16.753305 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.755209 kubelet[2748]: W1101 00:43:16.753313 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.755472 kubelet[2748]: E1101 00:43:16.753322 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:16.755472 kubelet[2748]: E1101 00:43:16.753486 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.755472 kubelet[2748]: W1101 00:43:16.753512 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.755472 kubelet[2748]: E1101 00:43:16.753520 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:16.755472 kubelet[2748]: E1101 00:43:16.753682 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.755472 kubelet[2748]: W1101 00:43:16.753689 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.755472 kubelet[2748]: E1101 00:43:16.753697 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:16.755472 kubelet[2748]: E1101 00:43:16.753851 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.755472 kubelet[2748]: W1101 00:43:16.753858 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.755472 kubelet[2748]: E1101 00:43:16.753867 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:16.756096 kubelet[2748]: E1101 00:43:16.756082 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.756183 kubelet[2748]: W1101 00:43:16.756173 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.756235 kubelet[2748]: E1101 00:43:16.756227 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:16.761310 kubelet[2748]: E1101 00:43:16.761282 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:16.761478 kubelet[2748]: W1101 00:43:16.761368 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:16.761478 kubelet[2748]: E1101 00:43:16.761398 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:17.129000 audit[3384]: NETFILTER_CFG table=filter:101 family=2 entries=22 op=nft_register_rule pid=3384 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:17.129000 audit[3384]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffde7ab8860 a2=0 a3=7ffde7ab884c items=0 ppid=2891 pid=3384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:17.129000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:17.133000 audit[3384]: NETFILTER_CFG table=nat:102 family=2 entries=12 op=nft_register_rule pid=3384 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:17.133000 audit[3384]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffde7ab8860 a2=0 a3=0 items=0 ppid=2891 pid=3384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:17.133000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:17.625426 amazon-ssm-agent[1885]: 2025-11-01 00:43:17 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Nov 1 00:43:17.653197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4206830773.mount: Deactivated successfully. 
Nov 1 00:43:18.349570 kubelet[2748]: E1101 00:43:18.349520 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5lqpx" podUID="9a6bbdac-9f73-4cc6-aadc-84424d8082ea" Nov 1 00:43:18.780497 env[1822]: time="2025-11-01T00:43:18.780378249Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:18.782827 env[1822]: time="2025-11-01T00:43:18.782782936Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:18.784534 env[1822]: time="2025-11-01T00:43:18.784487838Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:18.785944 env[1822]: time="2025-11-01T00:43:18.785915213Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:18.786376 env[1822]: time="2025-11-01T00:43:18.786332484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 1 00:43:18.788125 env[1822]: time="2025-11-01T00:43:18.788092769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 1 00:43:18.814064 env[1822]: time="2025-11-01T00:43:18.811969465Z" level=info msg="CreateContainer within sandbox 
\"c742cecf59dc1edf4f24de7cd882563273bc5a00366abca6fe9c6f2e49787bba\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 1 00:43:18.833298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3865389700.mount: Deactivated successfully. Nov 1 00:43:18.839973 env[1822]: time="2025-11-01T00:43:18.839913777Z" level=info msg="CreateContainer within sandbox \"c742cecf59dc1edf4f24de7cd882563273bc5a00366abca6fe9c6f2e49787bba\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7323db1fbc2c178470ddede966ba5f29e81e176f08d8e2a0c8d5a375acd015b0\"" Nov 1 00:43:18.840940 env[1822]: time="2025-11-01T00:43:18.840903205Z" level=info msg="StartContainer for \"7323db1fbc2c178470ddede966ba5f29e81e176f08d8e2a0c8d5a375acd015b0\"" Nov 1 00:43:18.943468 env[1822]: time="2025-11-01T00:43:18.943405883Z" level=info msg="StartContainer for \"7323db1fbc2c178470ddede966ba5f29e81e176f08d8e2a0c8d5a375acd015b0\" returns successfully" Nov 1 00:43:19.476417 kubelet[2748]: E1101 00:43:19.476388 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:19.477089 kubelet[2748]: W1101 00:43:19.476905 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:19.477089 kubelet[2748]: E1101 00:43:19.476962 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:19.477351 kubelet[2748]: E1101 00:43:19.477243 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:19.477351 kubelet[2748]: W1101 00:43:19.477254 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:19.477351 kubelet[2748]: E1101 00:43:19.477266 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:19.477532 kubelet[2748]: E1101 00:43:19.477523 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:19.477584 kubelet[2748]: W1101 00:43:19.477576 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:19.477632 kubelet[2748]: E1101 00:43:19.477625 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:19.478118 kubelet[2748]: E1101 00:43:19.478098 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:19.478237 kubelet[2748]: W1101 00:43:19.478224 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:19.478295 kubelet[2748]: E1101 00:43:19.478285 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:19.479109 kubelet[2748]: E1101 00:43:19.479091 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:19.479207 kubelet[2748]: W1101 00:43:19.479110 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:19.479520 kubelet[2748]: E1101 00:43:19.479127 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:19.479826 kubelet[2748]: E1101 00:43:19.479809 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:19.479917 kubelet[2748]: W1101 00:43:19.479827 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:19.479917 kubelet[2748]: E1101 00:43:19.479842 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:19.480125 kubelet[2748]: E1101 00:43:19.480097 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:19.480125 kubelet[2748]: W1101 00:43:19.480119 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:19.480243 kubelet[2748]: E1101 00:43:19.480133 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:19.480432 kubelet[2748]: E1101 00:43:19.480383 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:19.480432 kubelet[2748]: W1101 00:43:19.480395 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:19.480432 kubelet[2748]: E1101 00:43:19.480407 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:19.481587 kubelet[2748]: E1101 00:43:19.480662 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:19.481587 kubelet[2748]: W1101 00:43:19.480674 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:19.481587 kubelet[2748]: E1101 00:43:19.480695 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:19.481587 kubelet[2748]: E1101 00:43:19.480886 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:19.481587 kubelet[2748]: W1101 00:43:19.480905 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:19.481587 kubelet[2748]: E1101 00:43:19.480917 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:19.481587 kubelet[2748]: E1101 00:43:19.481111 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:19.481587 kubelet[2748]: W1101 00:43:19.481129 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:19.481587 kubelet[2748]: E1101 00:43:19.481141 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:19.481587 kubelet[2748]: E1101 00:43:19.481349 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:19.482120 kubelet[2748]: W1101 00:43:19.481359 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:19.482120 kubelet[2748]: E1101 00:43:19.481370 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:19.482120 kubelet[2748]: E1101 00:43:19.481683 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:19.482120 kubelet[2748]: W1101 00:43:19.481694 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:19.482120 kubelet[2748]: E1101 00:43:19.481707 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:19.482120 kubelet[2748]: E1101 00:43:19.481949 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:19.482120 kubelet[2748]: W1101 00:43:19.481959 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:19.482120 kubelet[2748]: E1101 00:43:19.481971 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:19.482562 kubelet[2748]: E1101 00:43:19.482159 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:19.482562 kubelet[2748]: W1101 00:43:19.482168 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:19.482562 kubelet[2748]: E1101 00:43:19.482179 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:19.482562 kubelet[2748]: E1101 00:43:19.482499 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:19.482562 kubelet[2748]: W1101 00:43:19.482510 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:19.482562 kubelet[2748]: E1101 00:43:19.482523 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:19.483090 kubelet[2748]: E1101 00:43:19.482941 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:19.483090 kubelet[2748]: W1101 00:43:19.482953 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:19.483090 kubelet[2748]: E1101 00:43:19.482971 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:19.486184 kubelet[2748]: E1101 00:43:19.483253 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:19.486184 kubelet[2748]: W1101 00:43:19.483265 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:19.486184 kubelet[2748]: E1101 00:43:19.483282 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:19.486184 kubelet[2748]: E1101 00:43:19.483540 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:19.486184 kubelet[2748]: W1101 00:43:19.483550 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:19.486184 kubelet[2748]: E1101 00:43:19.483567 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:19.486184 kubelet[2748]: E1101 00:43:19.483800 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:19.486184 kubelet[2748]: W1101 00:43:19.483810 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:19.486184 kubelet[2748]: E1101 00:43:19.483826 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:19.486184 kubelet[2748]: E1101 00:43:19.484036 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:19.486758 kubelet[2748]: W1101 00:43:19.484056 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:19.486758 kubelet[2748]: E1101 00:43:19.484164 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:19.486758 kubelet[2748]: E1101 00:43:19.484307 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:19.486758 kubelet[2748]: W1101 00:43:19.484317 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:19.486758 kubelet[2748]: E1101 00:43:19.484459 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:19.486758 kubelet[2748]: E1101 00:43:19.484796 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:19.486758 kubelet[2748]: W1101 00:43:19.484808 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:19.486758 kubelet[2748]: E1101 00:43:19.484957 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:19.493071 kubelet[2748]: E1101 00:43:19.493029 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:19.493071 kubelet[2748]: W1101 00:43:19.493062 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:19.493301 kubelet[2748]: E1101 00:43:19.493090 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:19.493786 kubelet[2748]: E1101 00:43:19.493763 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:19.493786 kubelet[2748]: W1101 00:43:19.493786 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:19.493929 kubelet[2748]: E1101 00:43:19.493806 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:19.495709 kubelet[2748]: E1101 00:43:19.495685 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:19.495709 kubelet[2748]: W1101 00:43:19.495708 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:19.495903 kubelet[2748]: E1101 00:43:19.495730 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:19.496001 kubelet[2748]: E1101 00:43:19.495988 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:19.496069 kubelet[2748]: W1101 00:43:19.496004 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:19.496069 kubelet[2748]: E1101 00:43:19.496019 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:19.496278 kubelet[2748]: E1101 00:43:19.496264 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:19.496380 kubelet[2748]: W1101 00:43:19.496280 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:19.496380 kubelet[2748]: E1101 00:43:19.496294 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:19.496834 kubelet[2748]: E1101 00:43:19.496817 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:19.496834 kubelet[2748]: W1101 00:43:19.496834 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:19.496972 kubelet[2748]: E1101 00:43:19.496849 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:19.497096 kubelet[2748]: E1101 00:43:19.497082 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:19.497176 kubelet[2748]: W1101 00:43:19.497097 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:19.497176 kubelet[2748]: E1101 00:43:19.497111 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:19.497427 kubelet[2748]: E1101 00:43:19.497402 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:19.497427 kubelet[2748]: W1101 00:43:19.497418 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:19.497545 kubelet[2748]: E1101 00:43:19.497452 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:19.499632 kubelet[2748]: E1101 00:43:19.499607 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:19.499632 kubelet[2748]: W1101 00:43:19.499631 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:19.501687 kubelet[2748]: E1101 00:43:19.501655 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:43:19.504461 kubelet[2748]: E1101 00:43:19.504402 2748 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:43:19.504461 kubelet[2748]: W1101 00:43:19.504423 2748 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:43:19.504756 kubelet[2748]: E1101 00:43:19.504482 2748 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:43:19.861899 env[1822]: time="2025-11-01T00:43:19.861858780Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:19.864722 env[1822]: time="2025-11-01T00:43:19.864681875Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:19.866569 env[1822]: time="2025-11-01T00:43:19.866525245Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:19.868390 env[1822]: time="2025-11-01T00:43:19.868322131Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:19.868826 env[1822]: time="2025-11-01T00:43:19.868788737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 1 00:43:19.871863 env[1822]: time="2025-11-01T00:43:19.871682881Z" level=info msg="CreateContainer within sandbox \"01ab290d7b9394ba0e335b3acbdacc6726e2821a497f1f08af358682750df2a6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 1 00:43:19.890744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1404747904.mount: Deactivated successfully. 
Nov 1 00:43:19.897441 env[1822]: time="2025-11-01T00:43:19.897366892Z" level=info msg="CreateContainer within sandbox \"01ab290d7b9394ba0e335b3acbdacc6726e2821a497f1f08af358682750df2a6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a21eb5ca5c95f7bdfd3c3301b2f84a3635f7b4354d83c3f93f58934bf562a801\"" Nov 1 00:43:19.901035 env[1822]: time="2025-11-01T00:43:19.898897679Z" level=info msg="StartContainer for \"a21eb5ca5c95f7bdfd3c3301b2f84a3635f7b4354d83c3f93f58934bf562a801\"" Nov 1 00:43:19.997449 env[1822]: time="2025-11-01T00:43:19.997396676Z" level=info msg="StartContainer for \"a21eb5ca5c95f7bdfd3c3301b2f84a3635f7b4354d83c3f93f58934bf562a801\" returns successfully" Nov 1 00:43:20.314184 env[1822]: time="2025-11-01T00:43:20.314125068Z" level=info msg="shim disconnected" id=a21eb5ca5c95f7bdfd3c3301b2f84a3635f7b4354d83c3f93f58934bf562a801 Nov 1 00:43:20.314184 env[1822]: time="2025-11-01T00:43:20.314182843Z" level=warning msg="cleaning up after shim disconnected" id=a21eb5ca5c95f7bdfd3c3301b2f84a3635f7b4354d83c3f93f58934bf562a801 namespace=k8s.io Nov 1 00:43:20.314184 env[1822]: time="2025-11-01T00:43:20.314195600Z" level=info msg="cleaning up dead shim" Nov 1 00:43:20.323753 env[1822]: time="2025-11-01T00:43:20.323705161Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:43:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3514 runtime=io.containerd.runc.v2\n" Nov 1 00:43:20.350044 kubelet[2748]: E1101 00:43:20.348675 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5lqpx" podUID="9a6bbdac-9f73-4cc6-aadc-84424d8082ea" Nov 1 00:43:20.452127 kubelet[2748]: I1101 00:43:20.452097 2748 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:43:20.454156 env[1822]: 
time="2025-11-01T00:43:20.454114949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 1 00:43:20.473707 kubelet[2748]: I1101 00:43:20.473641 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5cdf694c78-whl26" podStartSLOduration=2.279205535 podStartE2EDuration="4.473621766s" podCreationTimestamp="2025-11-01 00:43:16 +0000 UTC" firstStartedPulling="2025-11-01 00:43:16.593224874 +0000 UTC m=+23.638997550" lastFinishedPulling="2025-11-01 00:43:18.787641063 +0000 UTC m=+25.833413781" observedRunningTime="2025-11-01 00:43:19.492646348 +0000 UTC m=+26.538419031" watchObservedRunningTime="2025-11-01 00:43:20.473621766 +0000 UTC m=+27.519394456" Nov 1 00:43:20.798520 systemd[1]: run-containerd-runc-k8s.io-a21eb5ca5c95f7bdfd3c3301b2f84a3635f7b4354d83c3f93f58934bf562a801-runc.H97I0y.mount: Deactivated successfully. Nov 1 00:43:20.799442 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a21eb5ca5c95f7bdfd3c3301b2f84a3635f7b4354d83c3f93f58934bf562a801-rootfs.mount: Deactivated successfully. 
Nov 1 00:43:22.348815 kubelet[2748]: E1101 00:43:22.348761 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5lqpx" podUID="9a6bbdac-9f73-4cc6-aadc-84424d8082ea" Nov 1 00:43:23.640142 env[1822]: time="2025-11-01T00:43:23.640073057Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:23.642238 env[1822]: time="2025-11-01T00:43:23.642197774Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:23.644189 env[1822]: time="2025-11-01T00:43:23.644142475Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:23.645805 env[1822]: time="2025-11-01T00:43:23.645755915Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:23.646445 env[1822]: time="2025-11-01T00:43:23.646405630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 1 00:43:23.649697 env[1822]: time="2025-11-01T00:43:23.649302627Z" level=info msg="CreateContainer within sandbox \"01ab290d7b9394ba0e335b3acbdacc6726e2821a497f1f08af358682750df2a6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 1 00:43:23.668209 env[1822]: 
time="2025-11-01T00:43:23.668137749Z" level=info msg="CreateContainer within sandbox \"01ab290d7b9394ba0e335b3acbdacc6726e2821a497f1f08af358682750df2a6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2b4e101e2855e6d0a48d56ae3ef00c99fe0d419d1b68d71c66428d53e8367807\"" Nov 1 00:43:23.670073 env[1822]: time="2025-11-01T00:43:23.668813037Z" level=info msg="StartContainer for \"2b4e101e2855e6d0a48d56ae3ef00c99fe0d419d1b68d71c66428d53e8367807\"" Nov 1 00:43:23.776896 env[1822]: time="2025-11-01T00:43:23.776846304Z" level=info msg="StartContainer for \"2b4e101e2855e6d0a48d56ae3ef00c99fe0d419d1b68d71c66428d53e8367807\" returns successfully" Nov 1 00:43:24.348239 kubelet[2748]: E1101 00:43:24.348179 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5lqpx" podUID="9a6bbdac-9f73-4cc6-aadc-84424d8082ea" Nov 1 00:43:24.824042 env[1822]: time="2025-11-01T00:43:24.823897082Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:43:24.850768 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b4e101e2855e6d0a48d56ae3ef00c99fe0d419d1b68d71c66428d53e8367807-rootfs.mount: Deactivated successfully. 
Nov 1 00:43:24.858541 env[1822]: time="2025-11-01T00:43:24.857662654Z" level=info msg="shim disconnected" id=2b4e101e2855e6d0a48d56ae3ef00c99fe0d419d1b68d71c66428d53e8367807 Nov 1 00:43:24.858541 env[1822]: time="2025-11-01T00:43:24.857703800Z" level=warning msg="cleaning up after shim disconnected" id=2b4e101e2855e6d0a48d56ae3ef00c99fe0d419d1b68d71c66428d53e8367807 namespace=k8s.io Nov 1 00:43:24.858541 env[1822]: time="2025-11-01T00:43:24.857713193Z" level=info msg="cleaning up dead shim" Nov 1 00:43:24.866830 env[1822]: time="2025-11-01T00:43:24.866770566Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:43:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3586 runtime=io.containerd.runc.v2\n" Nov 1 00:43:24.917579 kubelet[2748]: I1101 00:43:24.917549 2748 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 1 00:43:25.137903 kubelet[2748]: I1101 00:43:25.137713 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjd2g\" (UniqueName: \"kubernetes.io/projected/472f8f92-5499-46b7-8902-95424bad4337-kube-api-access-vjd2g\") pod \"calico-kube-controllers-6db4456f5f-n6pzz\" (UID: \"472f8f92-5499-46b7-8902-95424bad4337\") " pod="calico-system/calico-kube-controllers-6db4456f5f-n6pzz" Nov 1 00:43:25.137903 kubelet[2748]: I1101 00:43:25.137756 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cd31c79-d021-4671-b1b1-16d458644a79-config\") pod \"goldmane-666569f655-dn6kn\" (UID: \"6cd31c79-d021-4671-b1b1-16d458644a79\") " pod="calico-system/goldmane-666569f655-dn6kn" Nov 1 00:43:25.137903 kubelet[2748]: I1101 00:43:25.137779 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6d3c3149-9beb-44f8-a7ee-d6982872dcbb-calico-apiserver-certs\") pod 
\"calico-apiserver-68cc86985f-qb2p9\" (UID: \"6d3c3149-9beb-44f8-a7ee-d6982872dcbb\") " pod="calico-apiserver/calico-apiserver-68cc86985f-qb2p9" Nov 1 00:43:25.137903 kubelet[2748]: I1101 00:43:25.137804 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6cd31c79-d021-4671-b1b1-16d458644a79-goldmane-key-pair\") pod \"goldmane-666569f655-dn6kn\" (UID: \"6cd31c79-d021-4671-b1b1-16d458644a79\") " pod="calico-system/goldmane-666569f655-dn6kn" Nov 1 00:43:25.137903 kubelet[2748]: I1101 00:43:25.137820 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fa378298-f74d-4607-95be-88748265623e-whisker-backend-key-pair\") pod \"whisker-7f96c4696b-wc2sq\" (UID: \"fa378298-f74d-4607-95be-88748265623e\") " pod="calico-system/whisker-7f96c4696b-wc2sq" Nov 1 00:43:25.138168 kubelet[2748]: I1101 00:43:25.137837 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df0a1bba-62a6-45de-8540-a37de4852942-config-volume\") pod \"coredns-668d6bf9bc-hj6zc\" (UID: \"df0a1bba-62a6-45de-8540-a37de4852942\") " pod="kube-system/coredns-668d6bf9bc-hj6zc" Nov 1 00:43:25.138168 kubelet[2748]: I1101 00:43:25.137856 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ghsd\" (UniqueName: \"kubernetes.io/projected/6d3c3149-9beb-44f8-a7ee-d6982872dcbb-kube-api-access-6ghsd\") pod \"calico-apiserver-68cc86985f-qb2p9\" (UID: \"6d3c3149-9beb-44f8-a7ee-d6982872dcbb\") " pod="calico-apiserver/calico-apiserver-68cc86985f-qb2p9" Nov 1 00:43:25.138168 kubelet[2748]: I1101 00:43:25.137875 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6cd31c79-d021-4671-b1b1-16d458644a79-goldmane-ca-bundle\") pod \"goldmane-666569f655-dn6kn\" (UID: \"6cd31c79-d021-4671-b1b1-16d458644a79\") " pod="calico-system/goldmane-666569f655-dn6kn" Nov 1 00:43:25.138168 kubelet[2748]: I1101 00:43:25.137897 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d1a8da9-b760-4438-8153-c39262d29176-config-volume\") pod \"coredns-668d6bf9bc-k7np4\" (UID: \"9d1a8da9-b760-4438-8153-c39262d29176\") " pod="kube-system/coredns-668d6bf9bc-k7np4" Nov 1 00:43:25.138168 kubelet[2748]: I1101 00:43:25.137914 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/472f8f92-5499-46b7-8902-95424bad4337-tigera-ca-bundle\") pod \"calico-kube-controllers-6db4456f5f-n6pzz\" (UID: \"472f8f92-5499-46b7-8902-95424bad4337\") " pod="calico-system/calico-kube-controllers-6db4456f5f-n6pzz" Nov 1 00:43:25.138303 kubelet[2748]: I1101 00:43:25.137939 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwgg4\" (UniqueName: \"kubernetes.io/projected/9d1a8da9-b760-4438-8153-c39262d29176-kube-api-access-jwgg4\") pod \"coredns-668d6bf9bc-k7np4\" (UID: \"9d1a8da9-b760-4438-8153-c39262d29176\") " pod="kube-system/coredns-668d6bf9bc-k7np4" Nov 1 00:43:25.138303 kubelet[2748]: I1101 00:43:25.137955 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/94ef4085-6c01-4795-a191-98e0030c89bd-calico-apiserver-certs\") pod \"calico-apiserver-68cc86985f-ffnk6\" (UID: \"94ef4085-6c01-4795-a191-98e0030c89bd\") " pod="calico-apiserver/calico-apiserver-68cc86985f-ffnk6" Nov 1 00:43:25.138303 kubelet[2748]: I1101 00:43:25.137980 2748 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st25d\" (UniqueName: \"kubernetes.io/projected/94ef4085-6c01-4795-a191-98e0030c89bd-kube-api-access-st25d\") pod \"calico-apiserver-68cc86985f-ffnk6\" (UID: \"94ef4085-6c01-4795-a191-98e0030c89bd\") " pod="calico-apiserver/calico-apiserver-68cc86985f-ffnk6" Nov 1 00:43:25.138303 kubelet[2748]: I1101 00:43:25.137999 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa378298-f74d-4607-95be-88748265623e-whisker-ca-bundle\") pod \"whisker-7f96c4696b-wc2sq\" (UID: \"fa378298-f74d-4607-95be-88748265623e\") " pod="calico-system/whisker-7f96c4696b-wc2sq" Nov 1 00:43:25.138303 kubelet[2748]: I1101 00:43:25.138014 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wplvg\" (UniqueName: \"kubernetes.io/projected/6cd31c79-d021-4671-b1b1-16d458644a79-kube-api-access-wplvg\") pod \"goldmane-666569f655-dn6kn\" (UID: \"6cd31c79-d021-4671-b1b1-16d458644a79\") " pod="calico-system/goldmane-666569f655-dn6kn" Nov 1 00:43:25.138466 kubelet[2748]: I1101 00:43:25.138029 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zwgw\" (UniqueName: \"kubernetes.io/projected/fa378298-f74d-4607-95be-88748265623e-kube-api-access-6zwgw\") pod \"whisker-7f96c4696b-wc2sq\" (UID: \"fa378298-f74d-4607-95be-88748265623e\") " pod="calico-system/whisker-7f96c4696b-wc2sq" Nov 1 00:43:25.138466 kubelet[2748]: I1101 00:43:25.138043 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fp69\" (UniqueName: \"kubernetes.io/projected/df0a1bba-62a6-45de-8540-a37de4852942-kube-api-access-8fp69\") pod \"coredns-668d6bf9bc-hj6zc\" (UID: \"df0a1bba-62a6-45de-8540-a37de4852942\") " pod="kube-system/coredns-668d6bf9bc-hj6zc" Nov 1 
00:43:25.300849 env[1822]: time="2025-11-01T00:43:25.300398641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k7np4,Uid:9d1a8da9-b760-4438-8153-c39262d29176,Namespace:kube-system,Attempt:0,}" Nov 1 00:43:25.312041 env[1822]: time="2025-11-01T00:43:25.311671091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dn6kn,Uid:6cd31c79-d021-4671-b1b1-16d458644a79,Namespace:calico-system,Attempt:0,}" Nov 1 00:43:25.312496 env[1822]: time="2025-11-01T00:43:25.312462932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68cc86985f-ffnk6,Uid:94ef4085-6c01-4795-a191-98e0030c89bd,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:43:25.316169 env[1822]: time="2025-11-01T00:43:25.316117777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68cc86985f-qb2p9,Uid:6d3c3149-9beb-44f8-a7ee-d6982872dcbb,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:43:25.337420 env[1822]: time="2025-11-01T00:43:25.337384157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6db4456f5f-n6pzz,Uid:472f8f92-5499-46b7-8902-95424bad4337,Namespace:calico-system,Attempt:0,}" Nov 1 00:43:25.339331 env[1822]: time="2025-11-01T00:43:25.339296238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hj6zc,Uid:df0a1bba-62a6-45de-8540-a37de4852942,Namespace:kube-system,Attempt:0,}" Nov 1 00:43:25.339852 env[1822]: time="2025-11-01T00:43:25.339826831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f96c4696b-wc2sq,Uid:fa378298-f74d-4607-95be-88748265623e,Namespace:calico-system,Attempt:0,}" Nov 1 00:43:25.472083 env[1822]: time="2025-11-01T00:43:25.471961957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 1 00:43:25.938524 kubelet[2748]: I1101 00:43:25.931439 2748 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:43:26.019000 audit[3718]: NETFILTER_CFG 
table=filter:103 family=2 entries=21 op=nft_register_rule pid=3718 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:26.024283 kernel: kauditd_printk_skb: 8 callbacks suppressed Nov 1 00:43:26.024557 kernel: audit: type=1325 audit(1761957806.019:296): table=filter:103 family=2 entries=21 op=nft_register_rule pid=3718 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:26.038723 kernel: audit: type=1300 audit(1761957806.019:296): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcace03270 a2=0 a3=7ffcace0325c items=0 ppid=2891 pid=3718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:26.019000 audit[3718]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcace03270 a2=0 a3=7ffcace0325c items=0 ppid=2891 pid=3718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:26.045732 kernel: audit: type=1327 audit(1761957806.019:296): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:26.019000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:26.058771 kernel: audit: type=1325 audit(1761957806.041:297): table=nat:104 family=2 entries=19 op=nft_register_chain pid=3718 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:26.041000 audit[3718]: NETFILTER_CFG table=nat:104 family=2 entries=19 op=nft_register_chain pid=3718 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:26.041000 audit[3718]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffcace03270 a2=0 a3=7ffcace0325c items=0 
ppid=2891 pid=3718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:26.077562 kernel: audit: type=1300 audit(1761957806.041:297): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffcace03270 a2=0 a3=7ffcace0325c items=0 ppid=2891 pid=3718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:26.077709 kernel: audit: type=1327 audit(1761957806.041:297): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:26.041000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:26.203205 env[1822]: time="2025-11-01T00:43:26.198564812Z" level=error msg="Failed to destroy network for sandbox \"f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.202527 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c-shm.mount: Deactivated successfully. 
Nov 1 00:43:26.204845 env[1822]: time="2025-11-01T00:43:26.204793655Z" level=error msg="encountered an error cleaning up failed sandbox \"f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.204957 env[1822]: time="2025-11-01T00:43:26.204870905Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68cc86985f-qb2p9,Uid:6d3c3149-9beb-44f8-a7ee-d6982872dcbb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.207290 kubelet[2748]: E1101 00:43:26.205121 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.207451 kubelet[2748]: E1101 00:43:26.207357 2748 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68cc86985f-qb2p9" Nov 1 00:43:26.207451 kubelet[2748]: E1101 00:43:26.207392 2748 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68cc86985f-qb2p9" Nov 1 00:43:26.207551 kubelet[2748]: E1101 00:43:26.207455 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68cc86985f-qb2p9_calico-apiserver(6d3c3149-9beb-44f8-a7ee-d6982872dcbb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68cc86985f-qb2p9_calico-apiserver(6d3c3149-9beb-44f8-a7ee-d6982872dcbb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68cc86985f-qb2p9" podUID="6d3c3149-9beb-44f8-a7ee-d6982872dcbb" Nov 1 00:43:26.225623 env[1822]: time="2025-11-01T00:43:26.225562309Z" level=error msg="Failed to destroy network for sandbox \"184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.231217 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1-shm.mount: Deactivated successfully. 
Nov 1 00:43:26.232756 env[1822]: time="2025-11-01T00:43:26.232690345Z" level=error msg="encountered an error cleaning up failed sandbox \"184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.232895 env[1822]: time="2025-11-01T00:43:26.232769727Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k7np4,Uid:9d1a8da9-b760-4438-8153-c39262d29176,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.233060 kubelet[2748]: E1101 00:43:26.233006 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.233158 kubelet[2748]: E1101 00:43:26.233071 2748 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k7np4" Nov 1 00:43:26.233158 kubelet[2748]: E1101 00:43:26.233099 2748 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-k7np4" Nov 1 00:43:26.233260 kubelet[2748]: E1101 00:43:26.233150 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-k7np4_kube-system(9d1a8da9-b760-4438-8153-c39262d29176)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-k7np4_kube-system(9d1a8da9-b760-4438-8153-c39262d29176)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-k7np4" podUID="9d1a8da9-b760-4438-8153-c39262d29176" Nov 1 00:43:26.244621 env[1822]: time="2025-11-01T00:43:26.244554070Z" level=error msg="Failed to destroy network for sandbox \"6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.251925 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0-shm.mount: Deactivated successfully. 
Nov 1 00:43:26.254069 env[1822]: time="2025-11-01T00:43:26.254007893Z" level=error msg="Failed to destroy network for sandbox \"cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.260590 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44-shm.mount: Deactivated successfully. Nov 1 00:43:26.262798 env[1822]: time="2025-11-01T00:43:26.262733784Z" level=error msg="encountered an error cleaning up failed sandbox \"6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.263449 env[1822]: time="2025-11-01T00:43:26.263399178Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dn6kn,Uid:6cd31c79-d021-4671-b1b1-16d458644a79,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.264389 env[1822]: time="2025-11-01T00:43:26.263188089Z" level=error msg="encountered an error cleaning up failed sandbox \"cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 
00:43:26.265072 kubelet[2748]: E1101 00:43:26.264657 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.265072 kubelet[2748]: E1101 00:43:26.264722 2748 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-dn6kn" Nov 1 00:43:26.265072 kubelet[2748]: E1101 00:43:26.264752 2748 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-dn6kn" Nov 1 00:43:26.265264 kubelet[2748]: E1101 00:43:26.264801 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-dn6kn_calico-system(6cd31c79-d021-4671-b1b1-16d458644a79)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-dn6kn_calico-system(6cd31c79-d021-4671-b1b1-16d458644a79)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-dn6kn" podUID="6cd31c79-d021-4671-b1b1-16d458644a79" Nov 1 00:43:26.265533 env[1822]: time="2025-11-01T00:43:26.265485253Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f96c4696b-wc2sq,Uid:fa378298-f74d-4607-95be-88748265623e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.267712 kubelet[2748]: E1101 00:43:26.267505 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.267712 kubelet[2748]: E1101 00:43:26.267569 2748 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f96c4696b-wc2sq" Nov 1 00:43:26.267712 kubelet[2748]: E1101 00:43:26.267597 2748 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f96c4696b-wc2sq" Nov 1 00:43:26.267915 kubelet[2748]: E1101 00:43:26.267643 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7f96c4696b-wc2sq_calico-system(fa378298-f74d-4607-95be-88748265623e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7f96c4696b-wc2sq_calico-system(fa378298-f74d-4607-95be-88748265623e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7f96c4696b-wc2sq" podUID="fa378298-f74d-4607-95be-88748265623e" Nov 1 00:43:26.271956 env[1822]: time="2025-11-01T00:43:26.271897419Z" level=error msg="Failed to destroy network for sandbox \"0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.272463 env[1822]: time="2025-11-01T00:43:26.272418348Z" level=error msg="encountered an error cleaning up failed sandbox \"0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.272570 env[1822]: time="2025-11-01T00:43:26.272481177Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-hj6zc,Uid:df0a1bba-62a6-45de-8540-a37de4852942,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.272732 kubelet[2748]: E1101 00:43:26.272693 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.272822 kubelet[2748]: E1101 00:43:26.272758 2748 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hj6zc" Nov 1 00:43:26.272822 kubelet[2748]: E1101 00:43:26.272785 2748 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hj6zc" Nov 1 00:43:26.272915 kubelet[2748]: E1101 00:43:26.272844 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-668d6bf9bc-hj6zc_kube-system(df0a1bba-62a6-45de-8540-a37de4852942)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-hj6zc_kube-system(df0a1bba-62a6-45de-8540-a37de4852942)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-hj6zc" podUID="df0a1bba-62a6-45de-8540-a37de4852942" Nov 1 00:43:26.279419 env[1822]: time="2025-11-01T00:43:26.279359222Z" level=error msg="Failed to destroy network for sandbox \"f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.279843 env[1822]: time="2025-11-01T00:43:26.279773780Z" level=error msg="encountered an error cleaning up failed sandbox \"f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.279977 env[1822]: time="2025-11-01T00:43:26.279846328Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68cc86985f-ffnk6,Uid:94ef4085-6c01-4795-a191-98e0030c89bd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Nov 1 00:43:26.285645 kubelet[2748]: E1101 00:43:26.280055 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.285645 kubelet[2748]: E1101 00:43:26.280113 2748 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68cc86985f-ffnk6" Nov 1 00:43:26.285645 kubelet[2748]: E1101 00:43:26.280140 2748 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-68cc86985f-ffnk6" Nov 1 00:43:26.286250 kubelet[2748]: E1101 00:43:26.280191 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68cc86985f-ffnk6_calico-apiserver(94ef4085-6c01-4795-a191-98e0030c89bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68cc86985f-ffnk6_calico-apiserver(94ef4085-6c01-4795-a191-98e0030c89bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68cc86985f-ffnk6" podUID="94ef4085-6c01-4795-a191-98e0030c89bd" Nov 1 00:43:26.286527 env[1822]: time="2025-11-01T00:43:26.286473108Z" level=error msg="Failed to destroy network for sandbox \"9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.287801 env[1822]: time="2025-11-01T00:43:26.287751876Z" level=error msg="encountered an error cleaning up failed sandbox \"9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.288015 env[1822]: time="2025-11-01T00:43:26.287979721Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6db4456f5f-n6pzz,Uid:472f8f92-5499-46b7-8902-95424bad4337,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.292950 kubelet[2748]: E1101 00:43:26.291499 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.292950 kubelet[2748]: E1101 00:43:26.291574 2748 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6db4456f5f-n6pzz" Nov 1 00:43:26.292950 kubelet[2748]: E1101 00:43:26.291618 2748 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6db4456f5f-n6pzz" Nov 1 00:43:26.293218 kubelet[2748]: E1101 00:43:26.291681 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6db4456f5f-n6pzz_calico-system(472f8f92-5499-46b7-8902-95424bad4337)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6db4456f5f-n6pzz_calico-system(472f8f92-5499-46b7-8902-95424bad4337)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6db4456f5f-n6pzz" 
podUID="472f8f92-5499-46b7-8902-95424bad4337" Nov 1 00:43:26.351167 env[1822]: time="2025-11-01T00:43:26.351128637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5lqpx,Uid:9a6bbdac-9f73-4cc6-aadc-84424d8082ea,Namespace:calico-system,Attempt:0,}" Nov 1 00:43:26.423511 env[1822]: time="2025-11-01T00:43:26.423444968Z" level=error msg="Failed to destroy network for sandbox \"3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.423908 env[1822]: time="2025-11-01T00:43:26.423857198Z" level=error msg="encountered an error cleaning up failed sandbox \"3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.424037 env[1822]: time="2025-11-01T00:43:26.423930258Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5lqpx,Uid:9a6bbdac-9f73-4cc6-aadc-84424d8082ea,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.424308 kubelet[2748]: E1101 00:43:26.424243 2748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.424466 kubelet[2748]: E1101 00:43:26.424397 2748 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5lqpx" Nov 1 00:43:26.424598 kubelet[2748]: E1101 00:43:26.424500 2748 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5lqpx" Nov 1 00:43:26.424982 kubelet[2748]: E1101 00:43:26.424600 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5lqpx_calico-system(9a6bbdac-9f73-4cc6-aadc-84424d8082ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5lqpx_calico-system(9a6bbdac-9f73-4cc6-aadc-84424d8082ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5lqpx" podUID="9a6bbdac-9f73-4cc6-aadc-84424d8082ea" Nov 1 00:43:26.487672 kubelet[2748]: I1101 00:43:26.486041 2748 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" Nov 1 00:43:26.494317 kubelet[2748]: I1101 00:43:26.493134 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" Nov 1 00:43:26.501233 kubelet[2748]: I1101 00:43:26.501014 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" Nov 1 00:43:26.503386 kubelet[2748]: I1101 00:43:26.502905 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" Nov 1 00:43:26.505927 env[1822]: time="2025-11-01T00:43:26.504785942Z" level=info msg="StopPodSandbox for \"cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44\"" Nov 1 00:43:26.506240 env[1822]: time="2025-11-01T00:43:26.506205073Z" level=info msg="StopPodSandbox for \"f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92\"" Nov 1 00:43:26.515241 env[1822]: time="2025-11-01T00:43:26.515170534Z" level=info msg="StopPodSandbox for \"9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853\"" Nov 1 00:43:26.518471 env[1822]: time="2025-11-01T00:43:26.518427948Z" level=info msg="StopPodSandbox for \"0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6\"" Nov 1 00:43:26.522411 kubelet[2748]: I1101 00:43:26.522163 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" Nov 1 00:43:26.523353 env[1822]: time="2025-11-01T00:43:26.523288052Z" level=info msg="StopPodSandbox for \"f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c\"" Nov 1 00:43:26.546517 kubelet[2748]: I1101 00:43:26.545087 2748 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" Nov 1 00:43:26.547967 env[1822]: time="2025-11-01T00:43:26.547924513Z" level=info msg="StopPodSandbox for \"184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1\"" Nov 1 00:43:26.626191 kubelet[2748]: I1101 00:43:26.626162 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" Nov 1 00:43:26.629586 env[1822]: time="2025-11-01T00:43:26.629544503Z" level=info msg="StopPodSandbox for \"6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0\"" Nov 1 00:43:26.644070 env[1822]: time="2025-11-01T00:43:26.643950499Z" level=error msg="StopPodSandbox for \"9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853\" failed" error="failed to destroy network for sandbox \"9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.661920 kubelet[2748]: E1101 00:43:26.661857 2748 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" Nov 1 00:43:26.662089 kubelet[2748]: E1101 00:43:26.661968 2748 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853"} Nov 1 00:43:26.662089 kubelet[2748]: E1101 00:43:26.662068 2748 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" 
err="failed to \"KillPodSandbox\" for \"472f8f92-5499-46b7-8902-95424bad4337\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:43:26.662260 kubelet[2748]: E1101 00:43:26.662102 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"472f8f92-5499-46b7-8902-95424bad4337\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6db4456f5f-n6pzz" podUID="472f8f92-5499-46b7-8902-95424bad4337" Nov 1 00:43:26.690829 env[1822]: time="2025-11-01T00:43:26.690763482Z" level=error msg="StopPodSandbox for \"f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92\" failed" error="failed to destroy network for sandbox \"f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.691051 kubelet[2748]: E1101 00:43:26.691010 2748 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" podSandboxID="f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" Nov 1 00:43:26.691147 kubelet[2748]: E1101 00:43:26.691073 2748 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92"} Nov 1 00:43:26.691147 kubelet[2748]: E1101 00:43:26.691118 2748 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"94ef4085-6c01-4795-a191-98e0030c89bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:43:26.691284 kubelet[2748]: E1101 00:43:26.691152 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"94ef4085-6c01-4795-a191-98e0030c89bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68cc86985f-ffnk6" podUID="94ef4085-6c01-4795-a191-98e0030c89bd" Nov 1 00:43:26.691810 kubelet[2748]: I1101 00:43:26.691785 2748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" Nov 1 00:43:26.694789 env[1822]: time="2025-11-01T00:43:26.694715092Z" level=info msg="StopPodSandbox for \"3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20\"" Nov 1 00:43:26.710312 env[1822]: 
time="2025-11-01T00:43:26.710245576Z" level=error msg="StopPodSandbox for \"f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c\" failed" error="failed to destroy network for sandbox \"f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.713364 kubelet[2748]: E1101 00:43:26.713299 2748 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" Nov 1 00:43:26.713607 kubelet[2748]: E1101 00:43:26.713381 2748 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c"} Nov 1 00:43:26.713607 kubelet[2748]: E1101 00:43:26.713426 2748 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6d3c3149-9beb-44f8-a7ee-d6982872dcbb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:43:26.713607 kubelet[2748]: E1101 00:43:26.713461 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6d3c3149-9beb-44f8-a7ee-d6982872dcbb\" with KillPodSandboxError: \"rpc error: code = 
Unknown desc = failed to destroy network for sandbox \\\"f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-68cc86985f-qb2p9" podUID="6d3c3149-9beb-44f8-a7ee-d6982872dcbb" Nov 1 00:43:26.730985 env[1822]: time="2025-11-01T00:43:26.730917085Z" level=error msg="StopPodSandbox for \"cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44\" failed" error="failed to destroy network for sandbox \"cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.731213 kubelet[2748]: E1101 00:43:26.731173 2748 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" Nov 1 00:43:26.731310 kubelet[2748]: E1101 00:43:26.731233 2748 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44"} Nov 1 00:43:26.731310 kubelet[2748]: E1101 00:43:26.731282 2748 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fa378298-f74d-4607-95be-88748265623e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:43:26.731478 kubelet[2748]: E1101 00:43:26.731312 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fa378298-f74d-4607-95be-88748265623e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7f96c4696b-wc2sq" podUID="fa378298-f74d-4607-95be-88748265623e" Nov 1 00:43:26.773644 env[1822]: time="2025-11-01T00:43:26.771923282Z" level=error msg="StopPodSandbox for \"0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6\" failed" error="failed to destroy network for sandbox \"0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.774163 kubelet[2748]: E1101 00:43:26.774123 2748 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" Nov 1 00:43:26.774279 kubelet[2748]: E1101 00:43:26.774183 2748 
kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6"} Nov 1 00:43:26.774279 kubelet[2748]: E1101 00:43:26.774224 2748 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"df0a1bba-62a6-45de-8540-a37de4852942\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:43:26.774279 kubelet[2748]: E1101 00:43:26.774258 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"df0a1bba-62a6-45de-8540-a37de4852942\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-hj6zc" podUID="df0a1bba-62a6-45de-8540-a37de4852942" Nov 1 00:43:26.809755 env[1822]: time="2025-11-01T00:43:26.809656892Z" level=error msg="StopPodSandbox for \"184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1\" failed" error="failed to destroy network for sandbox \"184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.810701 kubelet[2748]: E1101 00:43:26.810654 2748 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc 
= failed to destroy network for sandbox \"184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" Nov 1 00:43:26.810701 kubelet[2748]: E1101 00:43:26.810720 2748 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1"} Nov 1 00:43:26.811007 kubelet[2748]: E1101 00:43:26.810771 2748 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9d1a8da9-b760-4438-8153-c39262d29176\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:43:26.811007 kubelet[2748]: E1101 00:43:26.810802 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9d1a8da9-b760-4438-8153-c39262d29176\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-k7np4" podUID="9d1a8da9-b760-4438-8153-c39262d29176" Nov 1 00:43:26.846037 env[1822]: time="2025-11-01T00:43:26.845976836Z" level=error msg="StopPodSandbox for \"3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20\" failed" error="failed to 
destroy network for sandbox \"3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.846471 kubelet[2748]: E1101 00:43:26.846421 2748 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" Nov 1 00:43:26.846589 kubelet[2748]: E1101 00:43:26.846488 2748 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20"} Nov 1 00:43:26.846589 kubelet[2748]: E1101 00:43:26.846531 2748 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9a6bbdac-9f73-4cc6-aadc-84424d8082ea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:43:26.846589 kubelet[2748]: E1101 00:43:26.846561 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9a6bbdac-9f73-4cc6-aadc-84424d8082ea\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5lqpx" podUID="9a6bbdac-9f73-4cc6-aadc-84424d8082ea" Nov 1 00:43:26.851973 env[1822]: time="2025-11-01T00:43:26.851905245Z" level=error msg="StopPodSandbox for \"6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0\" failed" error="failed to destroy network for sandbox \"6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:43:26.852629 kubelet[2748]: E1101 00:43:26.852395 2748 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" Nov 1 00:43:26.852629 kubelet[2748]: E1101 00:43:26.852469 2748 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0"} Nov 1 00:43:26.852629 kubelet[2748]: E1101 00:43:26.852529 2748 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6cd31c79-d021-4671-b1b1-16d458644a79\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" Nov 1 00:43:26.852629 kubelet[2748]: E1101 00:43:26.852567 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6cd31c79-d021-4671-b1b1-16d458644a79\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-dn6kn" podUID="6cd31c79-d021-4671-b1b1-16d458644a79" Nov 1 00:43:26.853409 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6-shm.mount: Deactivated successfully. Nov 1 00:43:26.853598 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853-shm.mount: Deactivated successfully. Nov 1 00:43:26.853753 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92-shm.mount: Deactivated successfully. Nov 1 00:43:32.209197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount791899432.mount: Deactivated successfully. 
Nov 1 00:43:32.508182 env[1822]: time="2025-11-01T00:43:32.508040291Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:32.567030 env[1822]: time="2025-11-01T00:43:32.566990841Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:32.585415 env[1822]: time="2025-11-01T00:43:32.585355787Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:32.613788 env[1822]: time="2025-11-01T00:43:32.613735968Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:43:32.614882 env[1822]: time="2025-11-01T00:43:32.614831083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 00:43:32.656247 env[1822]: time="2025-11-01T00:43:32.656196185Z" level=info msg="CreateContainer within sandbox \"01ab290d7b9394ba0e335b3acbdacc6726e2821a497f1f08af358682750df2a6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 00:43:32.720549 env[1822]: time="2025-11-01T00:43:32.720463448Z" level=info msg="CreateContainer within sandbox \"01ab290d7b9394ba0e335b3acbdacc6726e2821a497f1f08af358682750df2a6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8af868a050fed176d64ca5615531f2663c1dc93bce5246a2d030017002f6145e\"" Nov 1 00:43:32.722924 env[1822]: time="2025-11-01T00:43:32.721190623Z" level=info msg="StartContainer for 
\"8af868a050fed176d64ca5615531f2663c1dc93bce5246a2d030017002f6145e\"" Nov 1 00:43:32.795174 env[1822]: time="2025-11-01T00:43:32.795079313Z" level=info msg="StartContainer for \"8af868a050fed176d64ca5615531f2663c1dc93bce5246a2d030017002f6145e\" returns successfully" Nov 1 00:43:33.376879 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 00:43:33.377046 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 1 00:43:33.787649 kubelet[2748]: I1101 00:43:33.778276 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8q4b9" podStartSLOduration=1.876650337 podStartE2EDuration="17.769503433s" podCreationTimestamp="2025-11-01 00:43:16 +0000 UTC" firstStartedPulling="2025-11-01 00:43:16.723039727 +0000 UTC m=+23.768812400" lastFinishedPulling="2025-11-01 00:43:32.615892829 +0000 UTC m=+39.661665496" observedRunningTime="2025-11-01 00:43:33.769044213 +0000 UTC m=+40.814816899" watchObservedRunningTime="2025-11-01 00:43:33.769503433 +0000 UTC m=+40.815276119" Nov 1 00:43:33.816051 env[1822]: time="2025-11-01T00:43:33.815995802Z" level=info msg="StopPodSandbox for \"cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44\"" Nov 1 00:43:34.716696 kubelet[2748]: I1101 00:43:34.716665 2748 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:43:35.067000 audit[4072]: AVC avc: denied { write } for pid=4072 comm="tee" name="fd" dev="proc" ino=25514 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:43:35.112776 kernel: audit: type=1400 audit(1761957815.067:298): avc: denied { write } for pid=4072 comm="tee" name="fd" dev="proc" ino=25514 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:43:35.112904 kernel: audit: type=1300 audit(1761957815.067:298): arch=c000003e syscall=257 success=yes exit=3 
a0=ffffff9c a1=7ffe214717c5 a2=241 a3=1b6 items=1 ppid=4054 pid=4072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.112943 kernel: audit: type=1307 audit(1761957815.067:298): cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Nov 1 00:43:35.112973 kernel: audit: type=1302 audit(1761957815.067:298): item=0 name="/dev/fd/63" inode=24532 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:35.067000 audit[4072]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe214717c5 a2=241 a3=1b6 items=1 ppid=4054 pid=4072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.067000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Nov 1 00:43:35.067000 audit: PATH item=0 name="/dev/fd/63" inode=24532 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:35.067000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:43:35.132456 kernel: audit: type=1327 audit(1761957815.067:298): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:43:35.155000 audit[4115]: AVC avc: denied { write } for pid=4115 comm="tee" name="fd" dev="proc" ino=25613 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:43:35.167542 kernel: audit: type=1400 audit(1761957815.155:299): avc: 
denied { write } for pid=4115 comm="tee" name="fd" dev="proc" ino=25613 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:43:35.155000 audit[4115]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff1c8db7d6 a2=241 a3=1b6 items=1 ppid=4063 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.182371 kernel: audit: type=1300 audit(1761957815.155:299): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff1c8db7d6 a2=241 a3=1b6 items=1 ppid=4063 pid=4115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.155000 audit: CWD cwd="/etc/service/enabled/bird/log" Nov 1 00:43:35.206366 kernel: audit: type=1307 audit(1761957815.155:299): cwd="/etc/service/enabled/bird/log" Nov 1 00:43:35.155000 audit: PATH item=0 name="/dev/fd/63" inode=25602 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:35.219369 kernel: audit: type=1302 audit(1761957815.155:299): item=0 name="/dev/fd/63" inode=25602 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:35.185000 audit[4117]: AVC avc: denied { write } for pid=4117 comm="tee" name="fd" dev="proc" ino=25519 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:43:35.240366 kernel: audit: type=1400 audit(1761957815.185:300): avc: denied { write } for pid=4117 comm="tee" name="fd" dev="proc" ino=25519 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir 
permissive=0 Nov 1 00:43:35.185000 audit[4117]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffeeca827d5 a2=241 a3=1b6 items=1 ppid=4064 pid=4117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.185000 audit: CWD cwd="/etc/service/enabled/bird6/log" Nov 1 00:43:35.185000 audit: PATH item=0 name="/dev/fd/63" inode=25605 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:35.185000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:43:35.190000 audit[4111]: AVC avc: denied { write } for pid=4111 comm="tee" name="fd" dev="proc" ino=25624 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:43:35.190000 audit[4111]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd910ee7c6 a2=241 a3=1b6 items=1 ppid=4056 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.190000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Nov 1 00:43:35.190000 audit: PATH item=0 name="/dev/fd/63" inode=24573 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:35.190000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:43:35.185000 audit[4108]: AVC avc: denied { write } for pid=4108 comm="tee" name="fd" dev="proc" ino=25617 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:43:35.185000 audit[4108]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe1b8fa7d7 a2=241 a3=1b6 items=1 ppid=4061 pid=4108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.185000 audit: CWD cwd="/etc/service/enabled/cni/log" Nov 1 00:43:35.185000 audit: PATH item=0 name="/dev/fd/63" inode=24572 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:35.185000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:43:35.155000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:43:35.208000 audit[4123]: AVC avc: denied { write } for pid=4123 comm="tee" name="fd" dev="proc" ino=25526 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:43:35.208000 audit[4123]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdffbf27d5 a2=241 a3=1b6 items=1 ppid=4059 pid=4123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.208000 audit: CWD cwd="/etc/service/enabled/confd/log" Nov 1 00:43:35.208000 audit: PATH item=0 name="/dev/fd/63" inode=25609 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:35.208000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:43:35.228000 audit[4120]: AVC avc: denied { write } for pid=4120 comm="tee" name="fd" dev="proc" ino=25528 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 00:43:35.228000 audit[4120]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd8f52d7d5 a2=241 a3=1b6 items=1 ppid=4076 pid=4120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.228000 audit: CWD cwd="/etc/service/enabled/felix/log" Nov 1 00:43:35.228000 audit: PATH item=0 name="/dev/fd/63" inode=25608 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:43:35.228000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 00:43:35.638000 audit[4167]: AVC avc: denied { bpf } for pid=4167 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.638000 audit[4167]: AVC avc: denied { bpf } for pid=4167 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.638000 audit[4167]: AVC avc: denied { perfmon } for pid=4167 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.638000 audit[4167]: AVC avc: denied { perfmon } for pid=4167 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.638000 audit[4167]: AVC avc: denied { perfmon } for pid=4167 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.638000 audit[4167]: AVC avc: denied { perfmon } for pid=4167 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.638000 audit[4167]: AVC avc: denied { perfmon } for pid=4167 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.638000 audit[4167]: AVC avc: denied { bpf } for pid=4167 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.638000 audit[4167]: AVC avc: denied { bpf } for pid=4167 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.638000 audit: BPF prog-id=10 op=LOAD Nov 1 00:43:35.638000 audit[4167]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffda69b480 a2=98 a3=1fffffffffffffff items=0 ppid=4079 pid=4167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.638000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 00:43:35.638000 audit: BPF prog-id=10 op=UNLOAD Nov 1 00:43:35.639000 audit[4167]: AVC avc: denied { bpf } for pid=4167 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.639000 audit[4167]: AVC avc: denied { bpf } for pid=4167 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.639000 audit[4167]: AVC avc: denied { perfmon } for pid=4167 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.639000 audit[4167]: AVC avc: denied { perfmon } for pid=4167 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.639000 audit[4167]: AVC avc: denied { perfmon } for pid=4167 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.639000 audit[4167]: AVC avc: denied { perfmon } for pid=4167 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.639000 audit[4167]: AVC avc: denied { perfmon } for pid=4167 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.639000 audit[4167]: AVC avc: denied { bpf } for pid=4167 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.639000 audit[4167]: AVC avc: denied { bpf } for pid=4167 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.639000 audit: BPF prog-id=11 op=LOAD Nov 1 00:43:35.639000 audit[4167]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffda69b360 a2=94 a3=3 items=0 
ppid=4079 pid=4167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.639000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 00:43:35.639000 audit: BPF prog-id=11 op=UNLOAD Nov 1 00:43:35.639000 audit[4167]: AVC avc: denied { bpf } for pid=4167 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.639000 audit[4167]: AVC avc: denied { bpf } for pid=4167 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.639000 audit[4167]: AVC avc: denied { perfmon } for pid=4167 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.639000 audit[4167]: AVC avc: denied { perfmon } for pid=4167 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.639000 audit[4167]: AVC avc: denied { perfmon } for pid=4167 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.639000 audit[4167]: AVC avc: denied { perfmon } for pid=4167 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.639000 audit[4167]: AVC avc: denied { perfmon } for pid=4167 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.639000 audit[4167]: AVC avc: denied { bpf } for pid=4167 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.639000 audit[4167]: AVC avc: denied { bpf } for pid=4167 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.639000 audit: BPF prog-id=12 op=LOAD Nov 1 00:43:35.639000 audit[4167]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffda69b3a0 a2=94 a3=7fffda69b580 items=0 ppid=4079 pid=4167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.639000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 00:43:35.639000 audit: BPF prog-id=12 op=UNLOAD Nov 1 00:43:35.639000 audit[4167]: AVC avc: denied { perfmon } for pid=4167 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.639000 audit[4167]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7fffda69b470 a2=50 a3=a000000085 items=0 ppid=4079 pid=4167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.639000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 00:43:35.644000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.644000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.644000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.644000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.644000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.644000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.644000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.644000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.644000 
audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.644000 audit: BPF prog-id=13 op=LOAD Nov 1 00:43:35.644000 audit[4168]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe978ced10 a2=98 a3=3 items=0 ppid=4079 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.644000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:43:35.644000 audit: BPF prog-id=13 op=UNLOAD Nov 1 00:43:35.645000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.645000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.645000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.645000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.645000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.645000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Nov 1 00:43:35.645000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.645000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.645000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.645000 audit: BPF prog-id=14 op=LOAD Nov 1 00:43:35.645000 audit[4168]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe978ceb00 a2=94 a3=54428f items=0 ppid=4079 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.645000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:43:35.646000 audit: BPF prog-id=14 op=UNLOAD Nov 1 00:43:35.646000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.646000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.646000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.646000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.646000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.646000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.646000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.646000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.646000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.646000 audit: BPF prog-id=15 op=LOAD Nov 1 00:43:35.646000 audit[4168]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe978ceb30 a2=94 a3=2 items=0 ppid=4079 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.646000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:43:35.646000 audit: BPF prog-id=15 op=UNLOAD Nov 1 00:43:35.767000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.767000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.767000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.767000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.767000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.767000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.767000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.767000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.767000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.767000 audit: BPF prog-id=16 op=LOAD Nov 1 00:43:35.767000 audit[4168]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe978ce9f0 a2=94 a3=1 items=0 ppid=4079 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 
00:43:35.767000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:43:35.767000 audit: BPF prog-id=16 op=UNLOAD Nov 1 00:43:35.767000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.767000 audit[4168]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffe978ceac0 a2=50 a3=7ffe978ceba0 items=0 ppid=4079 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.767000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:43:35.779000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.779000 audit[4168]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe978cea00 a2=28 a3=0 items=0 ppid=4079 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.779000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:43:35.779000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.779000 audit[4168]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe978cea30 a2=28 a3=0 items=0 ppid=4079 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.779000 
audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:43:35.779000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.779000 audit[4168]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe978ce940 a2=28 a3=0 items=0 ppid=4079 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.779000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:43:35.779000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.779000 audit[4168]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe978cea50 a2=28 a3=0 items=0 ppid=4079 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.779000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:43:35.779000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.779000 audit[4168]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe978cea30 a2=28 a3=0 items=0 ppid=4079 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.779000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 
00:43:35.779000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.779000 audit[4168]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe978cea20 a2=28 a3=0 items=0 ppid=4079 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.779000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:43:35.779000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.779000 audit[4168]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe978cea50 a2=28 a3=0 items=0 ppid=4079 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.779000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:43:35.779000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.779000 audit[4168]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe978cea30 a2=28 a3=0 items=0 ppid=4079 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.779000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:43:35.779000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.779000 audit[4168]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe978cea50 a2=28 a3=0 items=0 ppid=4079 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.779000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:43:35.779000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.779000 audit[4168]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe978cea20 a2=28 a3=0 items=0 ppid=4079 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.779000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:43:35.779000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.779000 audit[4168]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe978cea90 a2=28 a3=0 items=0 ppid=4079 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.779000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:43:35.779000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.779000 audit[4168]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe978ce840 a2=50 a3=1 items=0 ppid=4079 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.779000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:43:35.779000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.779000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.779000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.779000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.779000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.779000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.779000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
00:43:35.779000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.779000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.779000 audit: BPF prog-id=17 op=LOAD Nov 1 00:43:35.779000 audit[4168]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe978ce840 a2=94 a3=5 items=0 ppid=4079 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.779000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:43:35.780000 audit: BPF prog-id=17 op=UNLOAD Nov 1 00:43:35.780000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.780000 audit[4168]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe978ce8f0 a2=50 a3=1 items=0 ppid=4079 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.780000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:43:35.780000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.780000 audit[4168]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffe978cea10 a2=4 a3=38 items=0 ppid=4079 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.780000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:43:35.780000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.780000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.780000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.780000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.780000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.780000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.780000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.780000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.780000 audit[4168]: AVC avc: denied { perfmon } 
for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.780000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.780000 audit[4168]: AVC avc: denied { confidentiality } for pid=4168 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:43:35.780000 audit[4168]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe978cea60 a2=94 a3=6 items=0 ppid=4079 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.780000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:43:35.780000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.780000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.780000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.780000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.780000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.780000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.780000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.780000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.780000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.780000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.780000 audit[4168]: AVC avc: denied { confidentiality } for pid=4168 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:43:35.780000 audit[4168]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe978ce210 a2=94 a3=88 items=0 ppid=4079 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.780000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:43:35.780000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.780000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.780000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.780000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.780000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.780000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.780000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.780000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.780000 audit[4168]: AVC avc: denied { perfmon } for pid=4168 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.780000 audit[4168]: AVC avc: denied { bpf } for pid=4168 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.780000 audit[4168]: AVC avc: denied { confidentiality } for pid=4168 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:43:35.780000 audit[4168]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe978ce210 a2=94 a3=88 items=0 ppid=4079 pid=4168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.780000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 00:43:35.792000 audit[4171]: AVC avc: denied { bpf } for pid=4171 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.792000 audit[4171]: AVC avc: denied { bpf } for pid=4171 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.792000 audit[4171]: AVC avc: denied { perfmon } for pid=4171 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.792000 audit[4171]: AVC avc: denied { perfmon } for pid=4171 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.792000 audit[4171]: AVC avc: denied { perfmon } for pid=4171 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.792000 audit[4171]: AVC avc: denied { perfmon } for pid=4171 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.792000 audit[4171]: AVC avc: denied { perfmon } for pid=4171 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.792000 audit[4171]: AVC avc: denied { bpf } for pid=4171 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.792000 audit[4171]: AVC avc: denied { bpf } for pid=4171 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.792000 audit: BPF prog-id=18 op=LOAD Nov 1 00:43:35.792000 audit[4171]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc40cf31c0 a2=98 a3=1999999999999999 items=0 ppid=4079 pid=4171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.792000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Nov 1 00:43:35.792000 audit: BPF prog-id=18 op=UNLOAD Nov 1 00:43:35.793000 audit[4171]: AVC avc: denied { bpf } for pid=4171 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.793000 audit[4171]: AVC avc: denied { bpf } for pid=4171 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.793000 audit[4171]: AVC avc: denied { perfmon } for pid=4171 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.793000 audit[4171]: AVC avc: denied { perfmon } for pid=4171 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.793000 audit[4171]: AVC avc: denied { perfmon } for pid=4171 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.793000 audit[4171]: AVC avc: denied { perfmon } for pid=4171 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.793000 audit[4171]: AVC avc: denied { perfmon } for pid=4171 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.793000 audit[4171]: AVC avc: denied { bpf } for pid=4171 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.793000 audit[4171]: AVC avc: denied { bpf } for pid=4171 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.793000 audit: BPF prog-id=19 op=LOAD Nov 1 00:43:35.793000 audit[4171]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc40cf30a0 a2=94 a3=ffff items=0 ppid=4079 pid=4171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.793000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Nov 1 00:43:35.793000 audit: BPF prog-id=19 op=UNLOAD Nov 1 00:43:35.793000 audit[4171]: AVC avc: denied { bpf } for pid=4171 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.793000 audit[4171]: AVC avc: denied { bpf } for pid=4171 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.793000 audit[4171]: AVC avc: denied { perfmon } for pid=4171 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.793000 audit[4171]: AVC avc: denied { perfmon } for pid=4171 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.793000 audit[4171]: AVC avc: denied { perfmon } for pid=4171 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.793000 audit[4171]: AVC avc: denied { perfmon } for pid=4171 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.793000 audit[4171]: AVC avc: denied { perfmon } for pid=4171 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.793000 audit[4171]: AVC avc: denied { bpf } for pid=4171 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 00:43:35.793000 audit[4171]: AVC avc: denied { bpf } for pid=4171 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.793000 audit: BPF prog-id=20 op=LOAD Nov 1 00:43:35.793000 audit[4171]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc40cf30e0 a2=94 a3=7ffc40cf32c0 items=0 ppid=4079 pid=4171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.793000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Nov 1 00:43:35.793000 audit: BPF prog-id=20 op=UNLOAD Nov 1 00:43:35.888465 (udev-worker)[4000]: Network interface NamePolicy= disabled on kernel command line. 
Nov 1 00:43:35.895409 systemd-networkd[1497]: vxlan.calico: Link UP Nov 1 00:43:35.895419 systemd-networkd[1497]: vxlan.calico: Gained carrier Nov 1 00:43:35.923000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.923000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.923000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.923000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.923000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.923000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.923000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.923000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.923000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.923000 audit: BPF prog-id=21 op=LOAD Nov 1 00:43:35.923000 audit[4196]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff4a77ee0 a2=98 a3=0 items=0 ppid=4079 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.923000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:43:35.923000 audit: BPF prog-id=21 op=UNLOAD Nov 1 00:43:35.926292 (udev-worker)[4002]: Network interface NamePolicy= disabled on kernel command line. Nov 1 00:43:35.927622 (udev-worker)[4001]: Network interface NamePolicy= disabled on kernel command line. Nov 1 00:43:35.958000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.958000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.958000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.958000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.958000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.958000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.958000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.958000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.958000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.958000 audit: BPF prog-id=22 op=LOAD Nov 1 00:43:35.958000 audit[4196]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff4a77cf0 a2=94 a3=54428f items=0 ppid=4079 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.958000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:43:35.958000 audit: BPF prog-id=22 op=UNLOAD Nov 1 00:43:35.958000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.958000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.958000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.958000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.958000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.958000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.958000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.958000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.958000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.958000 audit: BPF prog-id=23 op=LOAD Nov 1 00:43:35.958000 audit[4196]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff4a77d20 a2=94 a3=2 items=0 ppid=4079 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.958000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:43:35.958000 audit: BPF prog-id=23 op=UNLOAD Nov 1 00:43:35.958000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.958000 audit[4196]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffff4a77bf0 a2=28 a3=0 items=0 ppid=4079 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.958000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:43:35.958000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.958000 audit[4196]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffff4a77c20 a2=28 a3=0 items=0 ppid=4079 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.958000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:43:35.958000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.958000 audit[4196]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffff4a77b30 a2=28 a3=0 items=0 ppid=4079 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.958000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:43:35.958000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.958000 audit[4196]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffff4a77c40 a2=28 a3=0 items=0 ppid=4079 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.958000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:43:35.959000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.959000 audit[4196]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffff4a77c20 a2=28 a3=0 items=0 ppid=4079 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.959000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:43:35.959000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.959000 audit[4196]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffff4a77c10 a2=28 a3=0 items=0 ppid=4079 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.959000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:43:35.959000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.959000 audit[4196]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffff4a77c40 a2=28 a3=0 items=0 ppid=4079 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.959000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:43:35.959000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.959000 audit[4196]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffff4a77c20 a2=28 a3=0 items=0 ppid=4079 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.959000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:43:35.959000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.959000 audit[4196]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffff4a77c40 a2=28 a3=0 items=0 ppid=4079 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.959000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:43:35.959000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.959000 audit[4196]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffff4a77c10 a2=28 a3=0 items=0 ppid=4079 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.959000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:43:35.959000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.959000 audit[4196]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffff4a77c80 a2=28 a3=0 items=0 ppid=4079 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.959000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:43:35.959000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.959000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.959000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.959000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.959000 
audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.959000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.959000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.959000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.959000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.959000 audit: BPF prog-id=24 op=LOAD Nov 1 00:43:35.959000 audit[4196]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffff4a77af0 a2=94 a3=0 items=0 ppid=4079 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.959000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:43:35.959000 audit: BPF prog-id=24 op=UNLOAD Nov 1 00:43:35.960000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.960000 audit[4196]: SYSCALL 
arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffff4a77ae0 a2=50 a3=2800 items=0 ppid=4079 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.960000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:43:35.960000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.960000 audit[4196]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffff4a77ae0 a2=50 a3=2800 items=0 ppid=4079 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.960000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:43:35.960000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.960000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.960000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
00:43:35.960000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.960000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.960000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.960000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.960000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.960000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.960000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.960000 audit: BPF prog-id=25 op=LOAD Nov 1 00:43:35.960000 audit[4196]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffff4a77300 a2=94 a3=2 items=0 ppid=4079 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.960000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:43:35.960000 audit: BPF prog-id=25 op=UNLOAD Nov 1 00:43:35.960000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.960000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.960000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.960000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.960000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.960000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.960000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.960000 audit[4196]: AVC avc: denied { perfmon } for pid=4196 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
00:43:35.960000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.960000 audit[4196]: AVC avc: denied { bpf } for pid=4196 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.960000 audit: BPF prog-id=26 op=LOAD Nov 1 00:43:35.960000 audit[4196]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffff4a77400 a2=94 a3=30 items=0 ppid=4079 pid=4196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.960000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 00:43:35.993000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.993000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.993000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.993000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.993000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.993000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.993000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.993000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.993000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.993000 audit: BPF prog-id=27 op=LOAD Nov 1 00:43:35.993000 audit[4202]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff5395f620 a2=98 a3=0 items=0 ppid=4079 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.993000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:43:35.994000 audit: BPF prog-id=27 op=UNLOAD Nov 1 00:43:35.995000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.995000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.995000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.995000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.995000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.995000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.995000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.995000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.995000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.995000 audit: BPF prog-id=28 op=LOAD Nov 1 00:43:35.995000 audit[4202]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff5395f410 a2=94 a3=54428f items=0 ppid=4079 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.995000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:43:35.995000 audit: BPF prog-id=28 op=UNLOAD Nov 1 00:43:35.995000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.995000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.995000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.995000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.995000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.995000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.995000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.995000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.995000 audit[4202]: AVC avc: 
denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:35.995000 audit: BPF prog-id=29 op=LOAD Nov 1 00:43:35.995000 audit[4202]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff5395f440 a2=94 a3=2 items=0 ppid=4079 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:35.995000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:43:35.996000 audit: BPF prog-id=29 op=UNLOAD Nov 1 00:43:36.130000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.130000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.130000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.130000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.130000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.130000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.130000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.130000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.130000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.130000 audit: BPF prog-id=30 op=LOAD Nov 1 00:43:36.130000 audit[4202]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff5395f300 a2=94 a3=1 items=0 ppid=4079 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:36.130000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:43:36.130000 audit: BPF prog-id=30 op=UNLOAD Nov 1 00:43:36.130000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.130000 audit[4202]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fff5395f3d0 a2=50 a3=7fff5395f4b0 items=0 ppid=4079 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 
00:43:36.130000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:43:36.145000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.145000 audit[4202]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff5395f310 a2=28 a3=0 items=0 ppid=4079 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:36.145000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:43:36.146000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.146000 audit[4202]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff5395f340 a2=28 a3=0 items=0 ppid=4079 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:36.146000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:43:36.146000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.146000 
audit[4202]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff5395f250 a2=28 a3=0 items=0 ppid=4079 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:36.146000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:43:36.146000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.146000 audit[4202]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff5395f360 a2=28 a3=0 items=0 ppid=4079 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:36.146000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:43:36.146000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.146000 audit[4202]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff5395f340 a2=28 a3=0 items=0 ppid=4079 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:36.146000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:43:36.146000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.146000 audit[4202]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff5395f330 a2=28 a3=0 items=0 ppid=4079 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:36.146000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:43:36.146000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.146000 audit[4202]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff5395f360 a2=28 a3=0 items=0 ppid=4079 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:36.146000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:43:36.146000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.146000 audit[4202]: SYSCALL arch=c000003e 
syscall=321 success=no exit=-22 a0=12 a1=7fff5395f340 a2=28 a3=0 items=0 ppid=4079 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:36.146000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:43:36.146000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.146000 audit[4202]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff5395f360 a2=28 a3=0 items=0 ppid=4079 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:36.146000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:43:36.146000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.146000 audit[4202]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff5395f330 a2=28 a3=0 items=0 ppid=4079 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:36.146000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:43:36.146000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.146000 audit[4202]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff5395f3a0 a2=28 a3=0 items=0 ppid=4079 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:36.146000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:43:36.147000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.147000 audit[4202]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff5395f150 a2=50 a3=1 items=0 ppid=4079 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:36.147000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:43:36.147000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.147000 audit[4202]: AVC avc: denied { bpf } 
for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.147000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.147000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.147000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.147000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.147000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.147000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.147000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.147000 audit: BPF prog-id=31 op=LOAD Nov 1 00:43:36.147000 audit[4202]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff5395f150 a2=94 a3=5 items=0 ppid=4079 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:36.147000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:43:36.147000 audit: BPF prog-id=31 op=UNLOAD Nov 1 00:43:36.147000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.147000 audit[4202]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff5395f200 a2=50 a3=1 items=0 ppid=4079 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:36.147000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:43:36.147000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.147000 audit[4202]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fff5395f320 a2=4 a3=38 items=0 ppid=4079 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:36.147000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:43:36.147000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.147000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.147000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.147000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.147000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.147000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.147000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.147000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.147000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.147000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.147000 audit[4202]: AVC avc: denied { confidentiality } for pid=4202 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:43:36.147000 audit[4202]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff5395f370 a2=94 a3=6 items=0 ppid=4079 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:36.147000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:43:36.148000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.148000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.148000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.148000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.148000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.148000 audit[4202]: AVC avc: denied { 
perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.148000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.148000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.148000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.148000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.148000 audit[4202]: AVC avc: denied { confidentiality } for pid=4202 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:43:36.148000 audit[4202]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff5395eb20 a2=94 a3=88 items=0 ppid=4079 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:36.148000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:43:36.148000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.148000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.148000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.148000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.148000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.148000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.148000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.148000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.148000 audit[4202]: AVC avc: denied { perfmon } for pid=4202 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.148000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
00:43:36.148000 audit[4202]: AVC avc: denied { confidentiality } for pid=4202 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 00:43:36.148000 audit[4202]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff5395eb20 a2=94 a3=88 items=0 ppid=4079 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:36.148000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:43:36.150000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.150000 audit[4202]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff53960550 a2=10 a3=f8f00800 items=0 ppid=4079 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:36.150000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:43:36.152000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.152000 audit[4202]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff539603f0 a2=10 a3=3 items=0 ppid=4079 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:36.152000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:43:36.152000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.152000 audit[4202]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff53960390 a2=10 a3=3 items=0 ppid=4079 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:36.152000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:43:36.152000 audit[4202]: AVC avc: denied { bpf } for pid=4202 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 00:43:36.152000 audit[4202]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff53960390 a2=10 a3=7 items=0 ppid=4079 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:36.152000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 00:43:36.168000 audit: BPF prog-id=26 op=UNLOAD Nov 1 00:43:36.265000 audit[4227]: NETFILTER_CFG 
table=mangle:105 family=2 entries=16 op=nft_register_chain pid=4227 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:43:36.265000 audit[4227]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffda3cd5b20 a2=0 a3=7ffda3cd5b0c items=0 ppid=4079 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:36.265000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:43:36.283000 audit[4228]: NETFILTER_CFG table=nat:106 family=2 entries=15 op=nft_register_chain pid=4228 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:43:36.283000 audit[4228]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffca3400d50 a2=0 a3=7ffca3400d3c items=0 ppid=4079 pid=4228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:36.283000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:43:36.285000 audit[4232]: NETFILTER_CFG table=filter:107 family=2 entries=39 op=nft_register_chain pid=4232 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:43:36.285000 audit[4232]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7ffdb0cfaf10 a2=0 a3=7ffdb0cfaefc items=0 ppid=4079 pid=4232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:36.285000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:43:36.291000 audit[4226]: NETFILTER_CFG table=raw:108 family=2 entries=21 op=nft_register_chain pid=4226 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:43:36.291000 audit[4226]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fffa515dc30 a2=0 a3=7fffa515dc1c items=0 ppid=4079 pid=4226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:36.291000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:43:36.313871 env[1822]: 2025-11-01 00:43:34.359 [INFO][4024] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" Nov 1 00:43:36.313871 env[1822]: 2025-11-01 00:43:34.360 [INFO][4024] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" iface="eth0" netns="/var/run/netns/cni-19213180-0152-e6d4-aded-ab9a94e3fb69" Nov 1 00:43:36.313871 env[1822]: 2025-11-01 00:43:34.361 [INFO][4024] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" iface="eth0" netns="/var/run/netns/cni-19213180-0152-e6d4-aded-ab9a94e3fb69" Nov 1 00:43:36.313871 env[1822]: 2025-11-01 00:43:34.362 [INFO][4024] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" iface="eth0" netns="/var/run/netns/cni-19213180-0152-e6d4-aded-ab9a94e3fb69" Nov 1 00:43:36.313871 env[1822]: 2025-11-01 00:43:34.362 [INFO][4024] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" Nov 1 00:43:36.313871 env[1822]: 2025-11-01 00:43:34.362 [INFO][4024] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" Nov 1 00:43:36.313871 env[1822]: 2025-11-01 00:43:36.289 [INFO][4042] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" HandleID="k8s-pod-network.cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" Workload="ip--172--31--19--28-k8s-whisker--7f96c4696b--wc2sq-eth0" Nov 1 00:43:36.313871 env[1822]: 2025-11-01 00:43:36.292 [INFO][4042] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:43:36.313871 env[1822]: 2025-11-01 00:43:36.293 [INFO][4042] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:43:36.313871 env[1822]: 2025-11-01 00:43:36.306 [WARNING][4042] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" HandleID="k8s-pod-network.cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" Workload="ip--172--31--19--28-k8s-whisker--7f96c4696b--wc2sq-eth0" Nov 1 00:43:36.313871 env[1822]: 2025-11-01 00:43:36.306 [INFO][4042] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" HandleID="k8s-pod-network.cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" Workload="ip--172--31--19--28-k8s-whisker--7f96c4696b--wc2sq-eth0" Nov 1 00:43:36.313871 env[1822]: 2025-11-01 00:43:36.308 [INFO][4042] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:43:36.313871 env[1822]: 2025-11-01 00:43:36.310 [INFO][4024] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" Nov 1 00:43:36.317985 systemd[1]: run-netns-cni\x2d19213180\x2d0152\x2de6d4\x2daded\x2dab9a94e3fb69.mount: Deactivated successfully. 
Nov 1 00:43:36.320184 env[1822]: time="2025-11-01T00:43:36.319177598Z" level=info msg="TearDown network for sandbox \"cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44\" successfully" Nov 1 00:43:36.320184 env[1822]: time="2025-11-01T00:43:36.319222136Z" level=info msg="StopPodSandbox for \"cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44\" returns successfully" Nov 1 00:43:36.454230 kubelet[2748]: I1101 00:43:36.452937 2748 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fa378298-f74d-4607-95be-88748265623e-whisker-backend-key-pair\") pod \"fa378298-f74d-4607-95be-88748265623e\" (UID: \"fa378298-f74d-4607-95be-88748265623e\") " Nov 1 00:43:36.454230 kubelet[2748]: I1101 00:43:36.453051 2748 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zwgw\" (UniqueName: \"kubernetes.io/projected/fa378298-f74d-4607-95be-88748265623e-kube-api-access-6zwgw\") pod \"fa378298-f74d-4607-95be-88748265623e\" (UID: \"fa378298-f74d-4607-95be-88748265623e\") " Nov 1 00:43:36.456294 kubelet[2748]: I1101 00:43:36.456243 2748 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa378298-f74d-4607-95be-88748265623e-whisker-ca-bundle\") pod \"fa378298-f74d-4607-95be-88748265623e\" (UID: \"fa378298-f74d-4607-95be-88748265623e\") " Nov 1 00:43:36.463044 systemd[1]: var-lib-kubelet-pods-fa378298\x2df74d\x2d4607\x2d95be\x2d88748265623e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 1 00:43:36.466261 systemd[1]: var-lib-kubelet-pods-fa378298\x2df74d\x2d4607\x2d95be\x2d88748265623e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6zwgw.mount: Deactivated successfully. 
Nov 1 00:43:36.468942 kubelet[2748]: I1101 00:43:36.468897 2748 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa378298-f74d-4607-95be-88748265623e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "fa378298-f74d-4607-95be-88748265623e" (UID: "fa378298-f74d-4607-95be-88748265623e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:43:36.470881 kubelet[2748]: I1101 00:43:36.470731 2748 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa378298-f74d-4607-95be-88748265623e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "fa378298-f74d-4607-95be-88748265623e" (UID: "fa378298-f74d-4607-95be-88748265623e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:43:36.475573 kubelet[2748]: I1101 00:43:36.466281 2748 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa378298-f74d-4607-95be-88748265623e-kube-api-access-6zwgw" (OuterVolumeSpecName: "kube-api-access-6zwgw") pod "fa378298-f74d-4607-95be-88748265623e" (UID: "fa378298-f74d-4607-95be-88748265623e"). InnerVolumeSpecName "kube-api-access-6zwgw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:43:36.559082 kubelet[2748]: I1101 00:43:36.559016 2748 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa378298-f74d-4607-95be-88748265623e-whisker-ca-bundle\") on node \"ip-172-31-19-28\" DevicePath \"\"" Nov 1 00:43:36.559082 kubelet[2748]: I1101 00:43:36.559078 2748 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fa378298-f74d-4607-95be-88748265623e-whisker-backend-key-pair\") on node \"ip-172-31-19-28\" DevicePath \"\"" Nov 1 00:43:36.559082 kubelet[2748]: I1101 00:43:36.559092 2748 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6zwgw\" (UniqueName: \"kubernetes.io/projected/fa378298-f74d-4607-95be-88748265623e-kube-api-access-6zwgw\") on node \"ip-172-31-19-28\" DevicePath \"\"" Nov 1 00:43:37.063029 kubelet[2748]: I1101 00:43:37.062984 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vkgl\" (UniqueName: \"kubernetes.io/projected/3c464f2f-5c33-4de3-9c8e-29f197089e35-kube-api-access-8vkgl\") pod \"whisker-78c5cf549b-vmbkl\" (UID: \"3c464f2f-5c33-4de3-9c8e-29f197089e35\") " pod="calico-system/whisker-78c5cf549b-vmbkl" Nov 1 00:43:37.063029 kubelet[2748]: I1101 00:43:37.063033 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3c464f2f-5c33-4de3-9c8e-29f197089e35-whisker-backend-key-pair\") pod \"whisker-78c5cf549b-vmbkl\" (UID: \"3c464f2f-5c33-4de3-9c8e-29f197089e35\") " pod="calico-system/whisker-78c5cf549b-vmbkl" Nov 1 00:43:37.063244 kubelet[2748]: I1101 00:43:37.063057 2748 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3c464f2f-5c33-4de3-9c8e-29f197089e35-whisker-ca-bundle\") pod \"whisker-78c5cf549b-vmbkl\" (UID: \"3c464f2f-5c33-4de3-9c8e-29f197089e35\") " pod="calico-system/whisker-78c5cf549b-vmbkl" Nov 1 00:43:37.210188 env[1822]: time="2025-11-01T00:43:37.210115863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78c5cf549b-vmbkl,Uid:3c464f2f-5c33-4de3-9c8e-29f197089e35,Namespace:calico-system,Attempt:0,}" Nov 1 00:43:37.351690 kubelet[2748]: I1101 00:43:37.351370 2748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa378298-f74d-4607-95be-88748265623e" path="/var/lib/kubelet/pods/fa378298-f74d-4607-95be-88748265623e/volumes" Nov 1 00:43:37.467676 systemd-networkd[1497]: cali53aff5cf812: Link UP Nov 1 00:43:37.469090 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 00:43:37.470058 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali53aff5cf812: link becomes ready Nov 1 00:43:37.470502 systemd-networkd[1497]: cali53aff5cf812: Gained carrier Nov 1 00:43:37.492026 env[1822]: 2025-11-01 00:43:37.265 [INFO][4245] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--28-k8s-whisker--78c5cf549b--vmbkl-eth0 whisker-78c5cf549b- calico-system 3c464f2f-5c33-4de3-9c8e-29f197089e35 892 0 2025-11-01 00:43:36 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:78c5cf549b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-19-28 whisker-78c5cf549b-vmbkl eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali53aff5cf812 [] [] }} ContainerID="729613e261b48e82045b4f3b3db2bea6e07252f2bf5d34e7d49603ea25257dde" Namespace="calico-system" Pod="whisker-78c5cf549b-vmbkl" WorkloadEndpoint="ip--172--31--19--28-k8s-whisker--78c5cf549b--vmbkl-" Nov 1 00:43:37.492026 env[1822]: 2025-11-01 00:43:37.265 [INFO][4245] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="729613e261b48e82045b4f3b3db2bea6e07252f2bf5d34e7d49603ea25257dde" Namespace="calico-system" Pod="whisker-78c5cf549b-vmbkl" WorkloadEndpoint="ip--172--31--19--28-k8s-whisker--78c5cf549b--vmbkl-eth0" Nov 1 00:43:37.492026 env[1822]: 2025-11-01 00:43:37.310 [INFO][4258] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="729613e261b48e82045b4f3b3db2bea6e07252f2bf5d34e7d49603ea25257dde" HandleID="k8s-pod-network.729613e261b48e82045b4f3b3db2bea6e07252f2bf5d34e7d49603ea25257dde" Workload="ip--172--31--19--28-k8s-whisker--78c5cf549b--vmbkl-eth0" Nov 1 00:43:37.492026 env[1822]: 2025-11-01 00:43:37.311 [INFO][4258] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="729613e261b48e82045b4f3b3db2bea6e07252f2bf5d34e7d49603ea25257dde" HandleID="k8s-pod-network.729613e261b48e82045b4f3b3db2bea6e07252f2bf5d34e7d49603ea25257dde" Workload="ip--172--31--19--28-k8s-whisker--78c5cf549b--vmbkl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5820), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-28", "pod":"whisker-78c5cf549b-vmbkl", "timestamp":"2025-11-01 00:43:37.310862106 +0000 UTC"}, Hostname:"ip-172-31-19-28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:43:37.492026 env[1822]: 2025-11-01 00:43:37.311 [INFO][4258] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:43:37.492026 env[1822]: 2025-11-01 00:43:37.311 [INFO][4258] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:43:37.492026 env[1822]: 2025-11-01 00:43:37.311 [INFO][4258] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-28' Nov 1 00:43:37.492026 env[1822]: 2025-11-01 00:43:37.328 [INFO][4258] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.729613e261b48e82045b4f3b3db2bea6e07252f2bf5d34e7d49603ea25257dde" host="ip-172-31-19-28" Nov 1 00:43:37.492026 env[1822]: 2025-11-01 00:43:37.416 [INFO][4258] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-28" Nov 1 00:43:37.492026 env[1822]: 2025-11-01 00:43:37.427 [INFO][4258] ipam/ipam.go 511: Trying affinity for 192.168.50.64/26 host="ip-172-31-19-28" Nov 1 00:43:37.492026 env[1822]: 2025-11-01 00:43:37.433 [INFO][4258] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.64/26 host="ip-172-31-19-28" Nov 1 00:43:37.492026 env[1822]: 2025-11-01 00:43:37.436 [INFO][4258] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.64/26 host="ip-172-31-19-28" Nov 1 00:43:37.492026 env[1822]: 2025-11-01 00:43:37.436 [INFO][4258] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.50.64/26 handle="k8s-pod-network.729613e261b48e82045b4f3b3db2bea6e07252f2bf5d34e7d49603ea25257dde" host="ip-172-31-19-28" Nov 1 00:43:37.492026 env[1822]: 2025-11-01 00:43:37.438 [INFO][4258] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.729613e261b48e82045b4f3b3db2bea6e07252f2bf5d34e7d49603ea25257dde Nov 1 00:43:37.492026 env[1822]: 2025-11-01 00:43:37.446 [INFO][4258] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.50.64/26 handle="k8s-pod-network.729613e261b48e82045b4f3b3db2bea6e07252f2bf5d34e7d49603ea25257dde" host="ip-172-31-19-28" Nov 1 00:43:37.492026 env[1822]: 2025-11-01 00:43:37.453 [INFO][4258] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.50.65/26] block=192.168.50.64/26 handle="k8s-pod-network.729613e261b48e82045b4f3b3db2bea6e07252f2bf5d34e7d49603ea25257dde" 
host="ip-172-31-19-28" Nov 1 00:43:37.492026 env[1822]: 2025-11-01 00:43:37.453 [INFO][4258] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.65/26] handle="k8s-pod-network.729613e261b48e82045b4f3b3db2bea6e07252f2bf5d34e7d49603ea25257dde" host="ip-172-31-19-28" Nov 1 00:43:37.492026 env[1822]: 2025-11-01 00:43:37.453 [INFO][4258] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:43:37.492026 env[1822]: 2025-11-01 00:43:37.454 [INFO][4258] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.50.65/26] IPv6=[] ContainerID="729613e261b48e82045b4f3b3db2bea6e07252f2bf5d34e7d49603ea25257dde" HandleID="k8s-pod-network.729613e261b48e82045b4f3b3db2bea6e07252f2bf5d34e7d49603ea25257dde" Workload="ip--172--31--19--28-k8s-whisker--78c5cf549b--vmbkl-eth0" Nov 1 00:43:37.493790 env[1822]: 2025-11-01 00:43:37.457 [INFO][4245] cni-plugin/k8s.go 418: Populated endpoint ContainerID="729613e261b48e82045b4f3b3db2bea6e07252f2bf5d34e7d49603ea25257dde" Namespace="calico-system" Pod="whisker-78c5cf549b-vmbkl" WorkloadEndpoint="ip--172--31--19--28-k8s-whisker--78c5cf549b--vmbkl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--28-k8s-whisker--78c5cf549b--vmbkl-eth0", GenerateName:"whisker-78c5cf549b-", Namespace:"calico-system", SelfLink:"", UID:"3c464f2f-5c33-4de3-9c8e-29f197089e35", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78c5cf549b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-28", ContainerID:"", Pod:"whisker-78c5cf549b-vmbkl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.50.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali53aff5cf812", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:43:37.493790 env[1822]: 2025-11-01 00:43:37.457 [INFO][4245] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.65/32] ContainerID="729613e261b48e82045b4f3b3db2bea6e07252f2bf5d34e7d49603ea25257dde" Namespace="calico-system" Pod="whisker-78c5cf549b-vmbkl" WorkloadEndpoint="ip--172--31--19--28-k8s-whisker--78c5cf549b--vmbkl-eth0" Nov 1 00:43:37.493790 env[1822]: 2025-11-01 00:43:37.457 [INFO][4245] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali53aff5cf812 ContainerID="729613e261b48e82045b4f3b3db2bea6e07252f2bf5d34e7d49603ea25257dde" Namespace="calico-system" Pod="whisker-78c5cf549b-vmbkl" WorkloadEndpoint="ip--172--31--19--28-k8s-whisker--78c5cf549b--vmbkl-eth0" Nov 1 00:43:37.493790 env[1822]: 2025-11-01 00:43:37.469 [INFO][4245] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="729613e261b48e82045b4f3b3db2bea6e07252f2bf5d34e7d49603ea25257dde" Namespace="calico-system" Pod="whisker-78c5cf549b-vmbkl" WorkloadEndpoint="ip--172--31--19--28-k8s-whisker--78c5cf549b--vmbkl-eth0" Nov 1 00:43:37.493790 env[1822]: 2025-11-01 00:43:37.469 [INFO][4245] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="729613e261b48e82045b4f3b3db2bea6e07252f2bf5d34e7d49603ea25257dde" Namespace="calico-system" Pod="whisker-78c5cf549b-vmbkl" WorkloadEndpoint="ip--172--31--19--28-k8s-whisker--78c5cf549b--vmbkl-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--28-k8s-whisker--78c5cf549b--vmbkl-eth0", GenerateName:"whisker-78c5cf549b-", Namespace:"calico-system", SelfLink:"", UID:"3c464f2f-5c33-4de3-9c8e-29f197089e35", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78c5cf549b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-28", ContainerID:"729613e261b48e82045b4f3b3db2bea6e07252f2bf5d34e7d49603ea25257dde", Pod:"whisker-78c5cf549b-vmbkl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.50.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali53aff5cf812", MAC:"c6:44:7f:7a:e0:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:43:37.493790 env[1822]: 2025-11-01 00:43:37.484 [INFO][4245] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="729613e261b48e82045b4f3b3db2bea6e07252f2bf5d34e7d49603ea25257dde" Namespace="calico-system" Pod="whisker-78c5cf549b-vmbkl" WorkloadEndpoint="ip--172--31--19--28-k8s-whisker--78c5cf549b--vmbkl-eth0" Nov 1 00:43:37.515368 env[1822]: time="2025-11-01T00:43:37.514450206Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:43:37.515368 env[1822]: time="2025-11-01T00:43:37.514552353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:43:37.515368 env[1822]: time="2025-11-01T00:43:37.514583347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:43:37.515368 env[1822]: time="2025-11-01T00:43:37.514817462Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/729613e261b48e82045b4f3b3db2bea6e07252f2bf5d34e7d49603ea25257dde pid=4282 runtime=io.containerd.runc.v2 Nov 1 00:43:37.571712 systemd[1]: run-containerd-runc-k8s.io-729613e261b48e82045b4f3b3db2bea6e07252f2bf5d34e7d49603ea25257dde-runc.TrqW2Y.mount: Deactivated successfully. Nov 1 00:43:37.533000 audit[4290]: NETFILTER_CFG table=filter:109 family=2 entries=59 op=nft_register_chain pid=4290 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:43:37.533000 audit[4290]: SYSCALL arch=c000003e syscall=46 success=yes exit=35860 a0=3 a1=7ffd702546d0 a2=0 a3=7ffd702546bc items=0 ppid=4079 pid=4290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:37.533000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:43:37.617488 env[1822]: time="2025-11-01T00:43:37.617333716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78c5cf549b-vmbkl,Uid:3c464f2f-5c33-4de3-9c8e-29f197089e35,Namespace:calico-system,Attempt:0,} returns sandbox id \"729613e261b48e82045b4f3b3db2bea6e07252f2bf5d34e7d49603ea25257dde\"" Nov 1 00:43:37.622423 
env[1822]: time="2025-11-01T00:43:37.622374445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:43:37.688599 systemd-networkd[1497]: vxlan.calico: Gained IPv6LL Nov 1 00:43:37.873348 env[1822]: time="2025-11-01T00:43:37.873186118Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:43:37.874482 env[1822]: time="2025-11-01T00:43:37.874358379Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:43:37.877144 kubelet[2748]: E1101 00:43:37.877085 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:43:37.877664 kubelet[2748]: E1101 00:43:37.877172 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:43:37.882563 kubelet[2748]: E1101 00:43:37.882503 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b7379b16cfad4b00a1e9214c9508a19a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8vkgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78c5cf549b-vmbkl_calico-system(3c464f2f-5c33-4de3-9c8e-29f197089e35): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:43:37.885146 env[1822]: time="2025-11-01T00:43:37.885106737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:43:38.093000 
audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.19.28:22-147.75.109.163:51916 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:38.094840 systemd[1]: Started sshd@7-172.31.19.28:22-147.75.109.163:51916.service. Nov 1 00:43:38.125465 env[1822]: time="2025-11-01T00:43:38.125024796Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:43:38.126396 env[1822]: time="2025-11-01T00:43:38.126168713Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:43:38.127109 kubelet[2748]: E1101 00:43:38.127057 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:43:38.127285 kubelet[2748]: E1101 00:43:38.127267 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:43:38.128477 kubelet[2748]: E1101 00:43:38.127538 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8vkgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78c5cf549b-vmbkl_calico-system(3c464f2f-5c33-4de3-9c8e-29f197089e35): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:43:38.133289 kubelet[2748]: E1101 00:43:38.133212 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78c5cf549b-vmbkl" podUID="3c464f2f-5c33-4de3-9c8e-29f197089e35" Nov 1 00:43:38.311000 audit[4325]: USER_ACCT pid=4325 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:38.313009 sshd[4325]: Accepted publickey for core from 147.75.109.163 port 51916 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:43:38.313000 audit[4325]: CRED_ACQ pid=4325 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:38.313000 audit[4325]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe8b5c8a60 a2=3 a3=0 items=0 ppid=1 pid=4325 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:38.313000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:43:38.317776 sshd[4325]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:38.329740 systemd[1]: Started session-8.scope. Nov 1 00:43:38.330081 systemd-logind[1803]: New session 8 of user core. Nov 1 00:43:38.335000 audit[4325]: USER_START pid=4325 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:38.337000 audit[4328]: CRED_ACQ pid=4328 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:38.351773 env[1822]: time="2025-11-01T00:43:38.349256473Z" level=info msg="StopPodSandbox for \"184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1\"" Nov 1 00:43:38.471699 env[1822]: 2025-11-01 00:43:38.407 [INFO][4340] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" Nov 1 00:43:38.471699 env[1822]: 2025-11-01 00:43:38.407 [INFO][4340] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" iface="eth0" netns="/var/run/netns/cni-7cf33c6a-e4df-7cd5-88aa-a9469028bba2" Nov 1 00:43:38.471699 env[1822]: 2025-11-01 00:43:38.407 [INFO][4340] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" iface="eth0" netns="/var/run/netns/cni-7cf33c6a-e4df-7cd5-88aa-a9469028bba2" Nov 1 00:43:38.471699 env[1822]: 2025-11-01 00:43:38.408 [INFO][4340] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" iface="eth0" netns="/var/run/netns/cni-7cf33c6a-e4df-7cd5-88aa-a9469028bba2" Nov 1 00:43:38.471699 env[1822]: 2025-11-01 00:43:38.408 [INFO][4340] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" Nov 1 00:43:38.471699 env[1822]: 2025-11-01 00:43:38.408 [INFO][4340] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" Nov 1 00:43:38.471699 env[1822]: 2025-11-01 00:43:38.449 [INFO][4347] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" HandleID="k8s-pod-network.184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" Workload="ip--172--31--19--28-k8s-coredns--668d6bf9bc--k7np4-eth0" Nov 1 00:43:38.471699 env[1822]: 2025-11-01 00:43:38.449 [INFO][4347] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:43:38.471699 env[1822]: 2025-11-01 00:43:38.449 [INFO][4347] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:43:38.471699 env[1822]: 2025-11-01 00:43:38.462 [WARNING][4347] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" HandleID="k8s-pod-network.184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" Workload="ip--172--31--19--28-k8s-coredns--668d6bf9bc--k7np4-eth0" Nov 1 00:43:38.471699 env[1822]: 2025-11-01 00:43:38.462 [INFO][4347] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" HandleID="k8s-pod-network.184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" Workload="ip--172--31--19--28-k8s-coredns--668d6bf9bc--k7np4-eth0" Nov 1 00:43:38.471699 env[1822]: 2025-11-01 00:43:38.465 [INFO][4347] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:43:38.471699 env[1822]: 2025-11-01 00:43:38.468 [INFO][4340] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" Nov 1 00:43:38.477107 systemd[1]: run-netns-cni\x2d7cf33c6a\x2de4df\x2d7cd5\x2d88aa\x2da9469028bba2.mount: Deactivated successfully. 
Nov 1 00:43:38.480555 env[1822]: time="2025-11-01T00:43:38.480511640Z" level=info msg="TearDown network for sandbox \"184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1\" successfully" Nov 1 00:43:38.480717 env[1822]: time="2025-11-01T00:43:38.480693662Z" level=info msg="StopPodSandbox for \"184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1\" returns successfully" Nov 1 00:43:38.481581 env[1822]: time="2025-11-01T00:43:38.481548508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k7np4,Uid:9d1a8da9-b760-4438-8153-c39262d29176,Namespace:kube-system,Attempt:1,}" Nov 1 00:43:38.674275 systemd-networkd[1497]: caliebcbe53047e: Link UP Nov 1 00:43:38.679555 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 00:43:38.679683 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliebcbe53047e: link becomes ready Nov 1 00:43:38.680870 systemd-networkd[1497]: caliebcbe53047e: Gained carrier Nov 1 00:43:38.717529 env[1822]: 2025-11-01 00:43:38.552 [INFO][4357] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--28-k8s-coredns--668d6bf9bc--k7np4-eth0 coredns-668d6bf9bc- kube-system 9d1a8da9-b760-4438-8153-c39262d29176 936 0 2025-11-01 00:42:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-28 coredns-668d6bf9bc-k7np4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliebcbe53047e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6fe4274f05d7a9e35197518262c5fbfe6ccfc5c158e2bcc62dd595f2dfdc5d90" Namespace="kube-system" Pod="coredns-668d6bf9bc-k7np4" WorkloadEndpoint="ip--172--31--19--28-k8s-coredns--668d6bf9bc--k7np4-" Nov 1 00:43:38.717529 env[1822]: 2025-11-01 00:43:38.552 [INFO][4357] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="6fe4274f05d7a9e35197518262c5fbfe6ccfc5c158e2bcc62dd595f2dfdc5d90" Namespace="kube-system" Pod="coredns-668d6bf9bc-k7np4" WorkloadEndpoint="ip--172--31--19--28-k8s-coredns--668d6bf9bc--k7np4-eth0" Nov 1 00:43:38.717529 env[1822]: 2025-11-01 00:43:38.596 [INFO][4369] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6fe4274f05d7a9e35197518262c5fbfe6ccfc5c158e2bcc62dd595f2dfdc5d90" HandleID="k8s-pod-network.6fe4274f05d7a9e35197518262c5fbfe6ccfc5c158e2bcc62dd595f2dfdc5d90" Workload="ip--172--31--19--28-k8s-coredns--668d6bf9bc--k7np4-eth0" Nov 1 00:43:38.717529 env[1822]: 2025-11-01 00:43:38.596 [INFO][4369] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6fe4274f05d7a9e35197518262c5fbfe6ccfc5c158e2bcc62dd595f2dfdc5d90" HandleID="k8s-pod-network.6fe4274f05d7a9e35197518262c5fbfe6ccfc5c158e2bcc62dd595f2dfdc5d90" Workload="ip--172--31--19--28-k8s-coredns--668d6bf9bc--k7np4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b73a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-28", "pod":"coredns-668d6bf9bc-k7np4", "timestamp":"2025-11-01 00:43:38.596277185 +0000 UTC"}, Hostname:"ip-172-31-19-28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:43:38.717529 env[1822]: 2025-11-01 00:43:38.597 [INFO][4369] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:43:38.717529 env[1822]: 2025-11-01 00:43:38.597 [INFO][4369] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:43:38.717529 env[1822]: 2025-11-01 00:43:38.597 [INFO][4369] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-28' Nov 1 00:43:38.717529 env[1822]: 2025-11-01 00:43:38.608 [INFO][4369] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6fe4274f05d7a9e35197518262c5fbfe6ccfc5c158e2bcc62dd595f2dfdc5d90" host="ip-172-31-19-28" Nov 1 00:43:38.717529 env[1822]: 2025-11-01 00:43:38.616 [INFO][4369] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-28" Nov 1 00:43:38.717529 env[1822]: 2025-11-01 00:43:38.631 [INFO][4369] ipam/ipam.go 511: Trying affinity for 192.168.50.64/26 host="ip-172-31-19-28" Nov 1 00:43:38.717529 env[1822]: 2025-11-01 00:43:38.636 [INFO][4369] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.64/26 host="ip-172-31-19-28" Nov 1 00:43:38.717529 env[1822]: 2025-11-01 00:43:38.640 [INFO][4369] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.64/26 host="ip-172-31-19-28" Nov 1 00:43:38.717529 env[1822]: 2025-11-01 00:43:38.640 [INFO][4369] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.50.64/26 handle="k8s-pod-network.6fe4274f05d7a9e35197518262c5fbfe6ccfc5c158e2bcc62dd595f2dfdc5d90" host="ip-172-31-19-28" Nov 1 00:43:38.717529 env[1822]: 2025-11-01 00:43:38.642 [INFO][4369] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6fe4274f05d7a9e35197518262c5fbfe6ccfc5c158e2bcc62dd595f2dfdc5d90 Nov 1 00:43:38.717529 env[1822]: 2025-11-01 00:43:38.648 [INFO][4369] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.50.64/26 handle="k8s-pod-network.6fe4274f05d7a9e35197518262c5fbfe6ccfc5c158e2bcc62dd595f2dfdc5d90" host="ip-172-31-19-28" Nov 1 00:43:38.717529 env[1822]: 2025-11-01 00:43:38.660 [INFO][4369] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.50.66/26] block=192.168.50.64/26 handle="k8s-pod-network.6fe4274f05d7a9e35197518262c5fbfe6ccfc5c158e2bcc62dd595f2dfdc5d90" 
host="ip-172-31-19-28" Nov 1 00:43:38.717529 env[1822]: 2025-11-01 00:43:38.660 [INFO][4369] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.66/26] handle="k8s-pod-network.6fe4274f05d7a9e35197518262c5fbfe6ccfc5c158e2bcc62dd595f2dfdc5d90" host="ip-172-31-19-28" Nov 1 00:43:38.717529 env[1822]: 2025-11-01 00:43:38.660 [INFO][4369] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:43:38.717529 env[1822]: 2025-11-01 00:43:38.660 [INFO][4369] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.50.66/26] IPv6=[] ContainerID="6fe4274f05d7a9e35197518262c5fbfe6ccfc5c158e2bcc62dd595f2dfdc5d90" HandleID="k8s-pod-network.6fe4274f05d7a9e35197518262c5fbfe6ccfc5c158e2bcc62dd595f2dfdc5d90" Workload="ip--172--31--19--28-k8s-coredns--668d6bf9bc--k7np4-eth0" Nov 1 00:43:38.718432 env[1822]: 2025-11-01 00:43:38.663 [INFO][4357] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6fe4274f05d7a9e35197518262c5fbfe6ccfc5c158e2bcc62dd595f2dfdc5d90" Namespace="kube-system" Pod="coredns-668d6bf9bc-k7np4" WorkloadEndpoint="ip--172--31--19--28-k8s-coredns--668d6bf9bc--k7np4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--28-k8s-coredns--668d6bf9bc--k7np4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9d1a8da9-b760-4438-8153-c39262d29176", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 42, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-28", ContainerID:"", Pod:"coredns-668d6bf9bc-k7np4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliebcbe53047e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:43:38.718432 env[1822]: 2025-11-01 00:43:38.665 [INFO][4357] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.66/32] ContainerID="6fe4274f05d7a9e35197518262c5fbfe6ccfc5c158e2bcc62dd595f2dfdc5d90" Namespace="kube-system" Pod="coredns-668d6bf9bc-k7np4" WorkloadEndpoint="ip--172--31--19--28-k8s-coredns--668d6bf9bc--k7np4-eth0" Nov 1 00:43:38.718432 env[1822]: 2025-11-01 00:43:38.665 [INFO][4357] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliebcbe53047e ContainerID="6fe4274f05d7a9e35197518262c5fbfe6ccfc5c158e2bcc62dd595f2dfdc5d90" Namespace="kube-system" Pod="coredns-668d6bf9bc-k7np4" WorkloadEndpoint="ip--172--31--19--28-k8s-coredns--668d6bf9bc--k7np4-eth0" Nov 1 00:43:38.718432 env[1822]: 2025-11-01 00:43:38.694 [INFO][4357] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6fe4274f05d7a9e35197518262c5fbfe6ccfc5c158e2bcc62dd595f2dfdc5d90" Namespace="kube-system" Pod="coredns-668d6bf9bc-k7np4" WorkloadEndpoint="ip--172--31--19--28-k8s-coredns--668d6bf9bc--k7np4-eth0" Nov 1 00:43:38.718432 env[1822]: 
2025-11-01 00:43:38.695 [INFO][4357] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6fe4274f05d7a9e35197518262c5fbfe6ccfc5c158e2bcc62dd595f2dfdc5d90" Namespace="kube-system" Pod="coredns-668d6bf9bc-k7np4" WorkloadEndpoint="ip--172--31--19--28-k8s-coredns--668d6bf9bc--k7np4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--28-k8s-coredns--668d6bf9bc--k7np4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9d1a8da9-b760-4438-8153-c39262d29176", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 42, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-28", ContainerID:"6fe4274f05d7a9e35197518262c5fbfe6ccfc5c158e2bcc62dd595f2dfdc5d90", Pod:"coredns-668d6bf9bc-k7np4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliebcbe53047e", MAC:"a2:2f:13:52:a3:49", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:43:38.718432 env[1822]: 2025-11-01 00:43:38.710 [INFO][4357] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6fe4274f05d7a9e35197518262c5fbfe6ccfc5c158e2bcc62dd595f2dfdc5d90" Namespace="kube-system" Pod="coredns-668d6bf9bc-k7np4" WorkloadEndpoint="ip--172--31--19--28-k8s-coredns--668d6bf9bc--k7np4-eth0" Nov 1 00:43:38.743175 kubelet[2748]: E1101 00:43:38.742944 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78c5cf549b-vmbkl" podUID="3c464f2f-5c33-4de3-9c8e-29f197089e35" Nov 1 00:43:38.748622 env[1822]: time="2025-11-01T00:43:38.748547914Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:43:38.748825 env[1822]: time="2025-11-01T00:43:38.748802683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:43:38.748915 env[1822]: time="2025-11-01T00:43:38.748897772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:43:38.749324 env[1822]: time="2025-11-01T00:43:38.749268499Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6fe4274f05d7a9e35197518262c5fbfe6ccfc5c158e2bcc62dd595f2dfdc5d90 pid=4394 runtime=io.containerd.runc.v2 Nov 1 00:43:38.819000 audit[4417]: NETFILTER_CFG table=filter:110 family=2 entries=42 op=nft_register_chain pid=4417 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:43:38.819000 audit[4417]: SYSCALL arch=c000003e syscall=46 success=yes exit=22552 a0=3 a1=7ffd984a4400 a2=0 a3=7ffd984a43ec items=0 ppid=4079 pid=4417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:38.819000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:43:38.865000 audit[4432]: NETFILTER_CFG table=filter:111 family=2 entries=20 op=nft_register_rule pid=4432 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:38.865000 audit[4432]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffedd4dd790 a2=0 a3=7ffedd4dd77c items=0 ppid=2891 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:38.865000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:38.870699 env[1822]: 
time="2025-11-01T00:43:38.870643540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k7np4,Uid:9d1a8da9-b760-4438-8153-c39262d29176,Namespace:kube-system,Attempt:1,} returns sandbox id \"6fe4274f05d7a9e35197518262c5fbfe6ccfc5c158e2bcc62dd595f2dfdc5d90\"" Nov 1 00:43:38.870000 audit[4432]: NETFILTER_CFG table=nat:112 family=2 entries=14 op=nft_register_rule pid=4432 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:38.870000 audit[4432]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffedd4dd790 a2=0 a3=0 items=0 ppid=2891 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:38.870000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:38.874554 env[1822]: time="2025-11-01T00:43:38.874094077Z" level=info msg="CreateContainer within sandbox \"6fe4274f05d7a9e35197518262c5fbfe6ccfc5c158e2bcc62dd595f2dfdc5d90\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:43:39.003713 env[1822]: time="2025-11-01T00:43:39.003615555Z" level=info msg="CreateContainer within sandbox \"6fe4274f05d7a9e35197518262c5fbfe6ccfc5c158e2bcc62dd595f2dfdc5d90\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d1b2b067977cac501d6b638b30d751c20b13ea62869282b08a96a8f80bf4d4b1\"" Nov 1 00:43:39.004520 env[1822]: time="2025-11-01T00:43:39.004495151Z" level=info msg="StartContainer for \"d1b2b067977cac501d6b638b30d751c20b13ea62869282b08a96a8f80bf4d4b1\"" Nov 1 00:43:39.135095 env[1822]: time="2025-11-01T00:43:39.135030731Z" level=info msg="StartContainer for \"d1b2b067977cac501d6b638b30d751c20b13ea62869282b08a96a8f80bf4d4b1\" returns successfully" Nov 1 00:43:39.161105 systemd-networkd[1497]: cali53aff5cf812: Gained IPv6LL Nov 1 
00:43:39.354424 env[1822]: time="2025-11-01T00:43:39.354376023Z" level=info msg="StopPodSandbox for \"0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6\"" Nov 1 00:43:39.354882 env[1822]: time="2025-11-01T00:43:39.354815706Z" level=info msg="StopPodSandbox for \"f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c\"" Nov 1 00:43:39.356912 env[1822]: time="2025-11-01T00:43:39.355447147Z" level=info msg="StopPodSandbox for \"6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0\"" Nov 1 00:43:39.358546 env[1822]: time="2025-11-01T00:43:39.357779650Z" level=info msg="StopPodSandbox for \"3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20\"" Nov 1 00:43:39.381293 sshd[4325]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:39.382000 audit[4325]: USER_END pid=4325 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:39.382000 audit[4325]: CRED_DISP pid=4325 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:39.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.19.28:22-147.75.109.163:51916 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:39.385748 systemd[1]: sshd@7-172.31.19.28:22-147.75.109.163:51916.service: Deactivated successfully. Nov 1 00:43:39.386916 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 00:43:39.394429 systemd-logind[1803]: Session 8 logged out. Waiting for processes to exit. 
Nov 1 00:43:39.397191 systemd-logind[1803]: Removed session 8. Nov 1 00:43:39.696517 env[1822]: 2025-11-01 00:43:39.495 [INFO][4489] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" Nov 1 00:43:39.696517 env[1822]: 2025-11-01 00:43:39.495 [INFO][4489] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" iface="eth0" netns="/var/run/netns/cni-13b0d134-215c-0444-bd7b-2f8ddd87c713" Nov 1 00:43:39.696517 env[1822]: 2025-11-01 00:43:39.495 [INFO][4489] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" iface="eth0" netns="/var/run/netns/cni-13b0d134-215c-0444-bd7b-2f8ddd87c713" Nov 1 00:43:39.696517 env[1822]: 2025-11-01 00:43:39.495 [INFO][4489] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" iface="eth0" netns="/var/run/netns/cni-13b0d134-215c-0444-bd7b-2f8ddd87c713" Nov 1 00:43:39.696517 env[1822]: 2025-11-01 00:43:39.495 [INFO][4489] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" Nov 1 00:43:39.696517 env[1822]: 2025-11-01 00:43:39.495 [INFO][4489] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" Nov 1 00:43:39.696517 env[1822]: 2025-11-01 00:43:39.648 [INFO][4531] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" HandleID="k8s-pod-network.f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" Workload="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--qb2p9-eth0" Nov 1 00:43:39.696517 env[1822]: 2025-11-01 00:43:39.648 
[INFO][4531] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:43:39.696517 env[1822]: 2025-11-01 00:43:39.649 [INFO][4531] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:43:39.696517 env[1822]: 2025-11-01 00:43:39.681 [WARNING][4531] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" HandleID="k8s-pod-network.f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" Workload="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--qb2p9-eth0" Nov 1 00:43:39.696517 env[1822]: 2025-11-01 00:43:39.681 [INFO][4531] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" HandleID="k8s-pod-network.f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" Workload="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--qb2p9-eth0" Nov 1 00:43:39.696517 env[1822]: 2025-11-01 00:43:39.687 [INFO][4531] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:43:39.696517 env[1822]: 2025-11-01 00:43:39.693 [INFO][4489] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" Nov 1 00:43:39.698946 env[1822]: time="2025-11-01T00:43:39.697047061Z" level=info msg="TearDown network for sandbox \"f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c\" successfully" Nov 1 00:43:39.698946 env[1822]: time="2025-11-01T00:43:39.697152342Z" level=info msg="StopPodSandbox for \"f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c\" returns successfully" Nov 1 00:43:39.701044 systemd[1]: run-netns-cni\x2d13b0d134\x2d215c\x2d0444\x2dbd7b\x2d2f8ddd87c713.mount: Deactivated successfully. 
Nov 1 00:43:39.708394 env[1822]: time="2025-11-01T00:43:39.706835932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68cc86985f-qb2p9,Uid:6d3c3149-9beb-44f8-a7ee-d6982872dcbb,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:43:39.764176 env[1822]: 2025-11-01 00:43:39.583 [INFO][4516] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" Nov 1 00:43:39.764176 env[1822]: 2025-11-01 00:43:39.583 [INFO][4516] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" iface="eth0" netns="/var/run/netns/cni-0dab9853-601d-8f17-610f-7a5ba0c1a478" Nov 1 00:43:39.764176 env[1822]: 2025-11-01 00:43:39.583 [INFO][4516] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" iface="eth0" netns="/var/run/netns/cni-0dab9853-601d-8f17-610f-7a5ba0c1a478" Nov 1 00:43:39.764176 env[1822]: 2025-11-01 00:43:39.583 [INFO][4516] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" iface="eth0" netns="/var/run/netns/cni-0dab9853-601d-8f17-610f-7a5ba0c1a478" Nov 1 00:43:39.764176 env[1822]: 2025-11-01 00:43:39.583 [INFO][4516] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" Nov 1 00:43:39.764176 env[1822]: 2025-11-01 00:43:39.583 [INFO][4516] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" Nov 1 00:43:39.764176 env[1822]: 2025-11-01 00:43:39.652 [INFO][4549] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" HandleID="k8s-pod-network.3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" Workload="ip--172--31--19--28-k8s-csi--node--driver--5lqpx-eth0" Nov 1 00:43:39.764176 env[1822]: 2025-11-01 00:43:39.653 [INFO][4549] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:43:39.764176 env[1822]: 2025-11-01 00:43:39.687 [INFO][4549] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:43:39.764176 env[1822]: 2025-11-01 00:43:39.707 [WARNING][4549] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" HandleID="k8s-pod-network.3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" Workload="ip--172--31--19--28-k8s-csi--node--driver--5lqpx-eth0" Nov 1 00:43:39.764176 env[1822]: 2025-11-01 00:43:39.707 [INFO][4549] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" HandleID="k8s-pod-network.3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" Workload="ip--172--31--19--28-k8s-csi--node--driver--5lqpx-eth0" Nov 1 00:43:39.764176 env[1822]: 2025-11-01 00:43:39.714 [INFO][4549] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:43:39.764176 env[1822]: 2025-11-01 00:43:39.718 [INFO][4516] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" Nov 1 00:43:39.764176 env[1822]: time="2025-11-01T00:43:39.761901531Z" level=info msg="TearDown network for sandbox \"3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20\" successfully" Nov 1 00:43:39.764176 env[1822]: time="2025-11-01T00:43:39.761949157Z" level=info msg="StopPodSandbox for \"3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20\" returns successfully" Nov 1 00:43:39.764176 env[1822]: time="2025-11-01T00:43:39.763183791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5lqpx,Uid:9a6bbdac-9f73-4cc6-aadc-84424d8082ea,Namespace:calico-system,Attempt:1,}" Nov 1 00:43:39.760980 systemd[1]: run-netns-cni\x2d0dab9853\x2d601d\x2d8f17\x2d610f\x2d7a5ba0c1a478.mount: Deactivated successfully. 
Nov 1 00:43:39.774184 env[1822]: 2025-11-01 00:43:39.520 [INFO][4511] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" Nov 1 00:43:39.774184 env[1822]: 2025-11-01 00:43:39.521 [INFO][4511] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" iface="eth0" netns="/var/run/netns/cni-d2066ba9-1d59-5853-aa46-8fca18fbca94" Nov 1 00:43:39.774184 env[1822]: 2025-11-01 00:43:39.521 [INFO][4511] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" iface="eth0" netns="/var/run/netns/cni-d2066ba9-1d59-5853-aa46-8fca18fbca94" Nov 1 00:43:39.774184 env[1822]: 2025-11-01 00:43:39.522 [INFO][4511] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" iface="eth0" netns="/var/run/netns/cni-d2066ba9-1d59-5853-aa46-8fca18fbca94" Nov 1 00:43:39.774184 env[1822]: 2025-11-01 00:43:39.522 [INFO][4511] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" Nov 1 00:43:39.774184 env[1822]: 2025-11-01 00:43:39.522 [INFO][4511] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" Nov 1 00:43:39.774184 env[1822]: 2025-11-01 00:43:39.671 [INFO][4533] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" HandleID="k8s-pod-network.6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" Workload="ip--172--31--19--28-k8s-goldmane--666569f655--dn6kn-eth0" Nov 1 00:43:39.774184 env[1822]: 2025-11-01 00:43:39.676 [INFO][4533] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 00:43:39.774184 env[1822]: 2025-11-01 00:43:39.713 [INFO][4533] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:43:39.774184 env[1822]: 2025-11-01 00:43:39.736 [WARNING][4533] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" HandleID="k8s-pod-network.6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" Workload="ip--172--31--19--28-k8s-goldmane--666569f655--dn6kn-eth0" Nov 1 00:43:39.774184 env[1822]: 2025-11-01 00:43:39.736 [INFO][4533] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" HandleID="k8s-pod-network.6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" Workload="ip--172--31--19--28-k8s-goldmane--666569f655--dn6kn-eth0" Nov 1 00:43:39.774184 env[1822]: 2025-11-01 00:43:39.738 [INFO][4533] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:43:39.774184 env[1822]: 2025-11-01 00:43:39.744 [INFO][4511] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" Nov 1 00:43:39.774184 env[1822]: time="2025-11-01T00:43:39.771564022Z" level=info msg="TearDown network for sandbox \"6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0\" successfully" Nov 1 00:43:39.774184 env[1822]: time="2025-11-01T00:43:39.771600666Z" level=info msg="StopPodSandbox for \"6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0\" returns successfully" Nov 1 00:43:39.774184 env[1822]: time="2025-11-01T00:43:39.772289881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dn6kn,Uid:6cd31c79-d021-4671-b1b1-16d458644a79,Namespace:calico-system,Attempt:1,}" Nov 1 00:43:39.805765 env[1822]: 2025-11-01 00:43:39.555 [INFO][4512] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" Nov 1 00:43:39.805765 env[1822]: 2025-11-01 00:43:39.555 [INFO][4512] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" iface="eth0" netns="/var/run/netns/cni-58936de4-7ac9-1a2d-fcbc-a63fb1c4d050" Nov 1 00:43:39.805765 env[1822]: 2025-11-01 00:43:39.557 [INFO][4512] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" iface="eth0" netns="/var/run/netns/cni-58936de4-7ac9-1a2d-fcbc-a63fb1c4d050" Nov 1 00:43:39.805765 env[1822]: 2025-11-01 00:43:39.558 [INFO][4512] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" iface="eth0" netns="/var/run/netns/cni-58936de4-7ac9-1a2d-fcbc-a63fb1c4d050" Nov 1 00:43:39.805765 env[1822]: 2025-11-01 00:43:39.558 [INFO][4512] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" Nov 1 00:43:39.805765 env[1822]: 2025-11-01 00:43:39.558 [INFO][4512] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" Nov 1 00:43:39.805765 env[1822]: 2025-11-01 00:43:39.676 [INFO][4544] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" HandleID="k8s-pod-network.0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" Workload="ip--172--31--19--28-k8s-coredns--668d6bf9bc--hj6zc-eth0" Nov 1 00:43:39.805765 env[1822]: 2025-11-01 00:43:39.677 [INFO][4544] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:43:39.805765 env[1822]: 2025-11-01 00:43:39.739 [INFO][4544] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:43:39.805765 env[1822]: 2025-11-01 00:43:39.770 [WARNING][4544] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" HandleID="k8s-pod-network.0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" Workload="ip--172--31--19--28-k8s-coredns--668d6bf9bc--hj6zc-eth0" Nov 1 00:43:39.805765 env[1822]: 2025-11-01 00:43:39.770 [INFO][4544] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" HandleID="k8s-pod-network.0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" Workload="ip--172--31--19--28-k8s-coredns--668d6bf9bc--hj6zc-eth0" Nov 1 00:43:39.805765 env[1822]: 2025-11-01 00:43:39.780 [INFO][4544] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:43:39.805765 env[1822]: 2025-11-01 00:43:39.789 [INFO][4512] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" Nov 1 00:43:39.807644 env[1822]: time="2025-11-01T00:43:39.806350733Z" level=info msg="TearDown network for sandbox \"0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6\" successfully" Nov 1 00:43:39.807644 env[1822]: time="2025-11-01T00:43:39.806396018Z" level=info msg="StopPodSandbox for \"0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6\" returns successfully" Nov 1 00:43:39.808778 env[1822]: time="2025-11-01T00:43:39.808738781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hj6zc,Uid:df0a1bba-62a6-45de-8540-a37de4852942,Namespace:kube-system,Attempt:1,}" Nov 1 00:43:39.827000 audit[4574]: NETFILTER_CFG table=filter:113 family=2 entries=20 op=nft_register_rule pid=4574 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:39.827000 audit[4574]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdd5aa0cd0 a2=0 a3=7ffdd5aa0cbc items=0 ppid=2891 pid=4574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:39.827000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:39.834000 audit[4574]: NETFILTER_CFG table=nat:114 family=2 entries=14 op=nft_register_rule pid=4574 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:39.834000 audit[4574]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdd5aa0cd0 a2=0 a3=0 items=0 ppid=2891 pid=4574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:39.834000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:40.125940 systemd-networkd[1497]: cali98e527ed158: Link UP Nov 1 00:43:40.128401 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 00:43:40.128484 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali98e527ed158: link becomes ready Nov 1 00:43:40.131215 systemd-networkd[1497]: cali98e527ed158: Gained carrier Nov 1 00:43:40.154198 kubelet[2748]: I1101 00:43:40.151770 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-k7np4" podStartSLOduration=43.151742982 podStartE2EDuration="43.151742982s" podCreationTimestamp="2025-11-01 00:42:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:43:39.789416961 +0000 UTC m=+46.835189644" watchObservedRunningTime="2025-11-01 00:43:40.151742982 +0000 UTC m=+47.197515663" Nov 1 00:43:40.158775 env[1822]: 2025-11-01 00:43:39.907 [INFO][4560] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--qb2p9-eth0 calico-apiserver-68cc86985f- calico-apiserver 6d3c3149-9beb-44f8-a7ee-d6982872dcbb 955 0 2025-11-01 00:43:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68cc86985f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-28 calico-apiserver-68cc86985f-qb2p9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali98e527ed158 [] [] }} ContainerID="30429913d59a9c1a438da6224a5de8fb453b655c94dd42cdbddb634d16dae4c2" Namespace="calico-apiserver" Pod="calico-apiserver-68cc86985f-qb2p9" WorkloadEndpoint="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--qb2p9-" Nov 1 00:43:40.158775 env[1822]: 2025-11-01 00:43:39.908 [INFO][4560] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="30429913d59a9c1a438da6224a5de8fb453b655c94dd42cdbddb634d16dae4c2" Namespace="calico-apiserver" Pod="calico-apiserver-68cc86985f-qb2p9" WorkloadEndpoint="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--qb2p9-eth0" Nov 1 00:43:40.158775 env[1822]: 2025-11-01 00:43:40.052 [INFO][4614] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="30429913d59a9c1a438da6224a5de8fb453b655c94dd42cdbddb634d16dae4c2" HandleID="k8s-pod-network.30429913d59a9c1a438da6224a5de8fb453b655c94dd42cdbddb634d16dae4c2" Workload="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--qb2p9-eth0" Nov 1 00:43:40.158775 env[1822]: 2025-11-01 00:43:40.055 [INFO][4614] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="30429913d59a9c1a438da6224a5de8fb453b655c94dd42cdbddb634d16dae4c2" HandleID="k8s-pod-network.30429913d59a9c1a438da6224a5de8fb453b655c94dd42cdbddb634d16dae4c2" Workload="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--qb2p9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc000122320), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-28", "pod":"calico-apiserver-68cc86985f-qb2p9", "timestamp":"2025-11-01 00:43:40.052014186 +0000 UTC"}, Hostname:"ip-172-31-19-28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:43:40.158775 env[1822]: 2025-11-01 00:43:40.055 [INFO][4614] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:43:40.158775 env[1822]: 2025-11-01 00:43:40.055 [INFO][4614] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:43:40.158775 env[1822]: 2025-11-01 00:43:40.056 [INFO][4614] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-28' Nov 1 00:43:40.158775 env[1822]: 2025-11-01 00:43:40.066 [INFO][4614] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.30429913d59a9c1a438da6224a5de8fb453b655c94dd42cdbddb634d16dae4c2" host="ip-172-31-19-28" Nov 1 00:43:40.158775 env[1822]: 2025-11-01 00:43:40.074 [INFO][4614] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-28" Nov 1 00:43:40.158775 env[1822]: 2025-11-01 00:43:40.081 [INFO][4614] ipam/ipam.go 511: Trying affinity for 192.168.50.64/26 host="ip-172-31-19-28" Nov 1 00:43:40.158775 env[1822]: 2025-11-01 00:43:40.084 [INFO][4614] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.64/26 host="ip-172-31-19-28" Nov 1 00:43:40.158775 env[1822]: 2025-11-01 00:43:40.088 [INFO][4614] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.64/26 host="ip-172-31-19-28" Nov 1 00:43:40.158775 env[1822]: 2025-11-01 00:43:40.088 [INFO][4614] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.50.64/26 handle="k8s-pod-network.30429913d59a9c1a438da6224a5de8fb453b655c94dd42cdbddb634d16dae4c2" host="ip-172-31-19-28" 
Nov 1 00:43:40.158775 env[1822]: 2025-11-01 00:43:40.092 [INFO][4614] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.30429913d59a9c1a438da6224a5de8fb453b655c94dd42cdbddb634d16dae4c2 Nov 1 00:43:40.158775 env[1822]: 2025-11-01 00:43:40.100 [INFO][4614] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.50.64/26 handle="k8s-pod-network.30429913d59a9c1a438da6224a5de8fb453b655c94dd42cdbddb634d16dae4c2" host="ip-172-31-19-28" Nov 1 00:43:40.158775 env[1822]: 2025-11-01 00:43:40.117 [INFO][4614] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.50.67/26] block=192.168.50.64/26 handle="k8s-pod-network.30429913d59a9c1a438da6224a5de8fb453b655c94dd42cdbddb634d16dae4c2" host="ip-172-31-19-28" Nov 1 00:43:40.158775 env[1822]: 2025-11-01 00:43:40.118 [INFO][4614] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.67/26] handle="k8s-pod-network.30429913d59a9c1a438da6224a5de8fb453b655c94dd42cdbddb634d16dae4c2" host="ip-172-31-19-28" Nov 1 00:43:40.158775 env[1822]: 2025-11-01 00:43:40.118 [INFO][4614] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:43:40.158775 env[1822]: 2025-11-01 00:43:40.118 [INFO][4614] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.50.67/26] IPv6=[] ContainerID="30429913d59a9c1a438da6224a5de8fb453b655c94dd42cdbddb634d16dae4c2" HandleID="k8s-pod-network.30429913d59a9c1a438da6224a5de8fb453b655c94dd42cdbddb634d16dae4c2" Workload="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--qb2p9-eth0" Nov 1 00:43:40.159449 env[1822]: 2025-11-01 00:43:40.121 [INFO][4560] cni-plugin/k8s.go 418: Populated endpoint ContainerID="30429913d59a9c1a438da6224a5de8fb453b655c94dd42cdbddb634d16dae4c2" Namespace="calico-apiserver" Pod="calico-apiserver-68cc86985f-qb2p9" WorkloadEndpoint="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--qb2p9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--qb2p9-eth0", GenerateName:"calico-apiserver-68cc86985f-", Namespace:"calico-apiserver", SelfLink:"", UID:"6d3c3149-9beb-44f8-a7ee-d6982872dcbb", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68cc86985f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-28", ContainerID:"", Pod:"calico-apiserver-68cc86985f-qb2p9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.67/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali98e527ed158", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:43:40.159449 env[1822]: 2025-11-01 00:43:40.121 [INFO][4560] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.67/32] ContainerID="30429913d59a9c1a438da6224a5de8fb453b655c94dd42cdbddb634d16dae4c2" Namespace="calico-apiserver" Pod="calico-apiserver-68cc86985f-qb2p9" WorkloadEndpoint="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--qb2p9-eth0" Nov 1 00:43:40.159449 env[1822]: 2025-11-01 00:43:40.122 [INFO][4560] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali98e527ed158 ContainerID="30429913d59a9c1a438da6224a5de8fb453b655c94dd42cdbddb634d16dae4c2" Namespace="calico-apiserver" Pod="calico-apiserver-68cc86985f-qb2p9" WorkloadEndpoint="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--qb2p9-eth0" Nov 1 00:43:40.159449 env[1822]: 2025-11-01 00:43:40.125 [INFO][4560] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="30429913d59a9c1a438da6224a5de8fb453b655c94dd42cdbddb634d16dae4c2" Namespace="calico-apiserver" Pod="calico-apiserver-68cc86985f-qb2p9" WorkloadEndpoint="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--qb2p9-eth0" Nov 1 00:43:40.159449 env[1822]: 2025-11-01 00:43:40.131 [INFO][4560] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="30429913d59a9c1a438da6224a5de8fb453b655c94dd42cdbddb634d16dae4c2" Namespace="calico-apiserver" Pod="calico-apiserver-68cc86985f-qb2p9" WorkloadEndpoint="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--qb2p9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--qb2p9-eth0", 
GenerateName:"calico-apiserver-68cc86985f-", Namespace:"calico-apiserver", SelfLink:"", UID:"6d3c3149-9beb-44f8-a7ee-d6982872dcbb", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68cc86985f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-28", ContainerID:"30429913d59a9c1a438da6224a5de8fb453b655c94dd42cdbddb634d16dae4c2", Pod:"calico-apiserver-68cc86985f-qb2p9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali98e527ed158", MAC:"ce:c0:82:17:4d:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:43:40.159449 env[1822]: 2025-11-01 00:43:40.154 [INFO][4560] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="30429913d59a9c1a438da6224a5de8fb453b655c94dd42cdbddb634d16dae4c2" Namespace="calico-apiserver" Pod="calico-apiserver-68cc86985f-qb2p9" WorkloadEndpoint="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--qb2p9-eth0" Nov 1 00:43:40.186049 env[1822]: time="2025-11-01T00:43:40.185911875Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:43:40.186534 env[1822]: time="2025-11-01T00:43:40.186487868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:43:40.186707 env[1822]: time="2025-11-01T00:43:40.186669320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:43:40.187223 env[1822]: time="2025-11-01T00:43:40.187156740Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/30429913d59a9c1a438da6224a5de8fb453b655c94dd42cdbddb634d16dae4c2 pid=4658 runtime=io.containerd.runc.v2 Nov 1 00:43:40.248371 systemd-networkd[1497]: calie2a264c2bbd: Link UP Nov 1 00:43:40.251358 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie2a264c2bbd: link becomes ready Nov 1 00:43:40.251703 systemd-networkd[1497]: calie2a264c2bbd: Gained carrier Nov 1 00:43:40.280141 env[1822]: 2025-11-01 00:43:39.885 [INFO][4573] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--28-k8s-csi--node--driver--5lqpx-eth0 csi-node-driver- calico-system 9a6bbdac-9f73-4cc6-aadc-84424d8082ea 958 0 2025-11-01 00:43:16 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-19-28 csi-node-driver-5lqpx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie2a264c2bbd [] [] }} ContainerID="120943dea4916e52e9e84ede1040664a7404f57b37e46766e065d8ea86a50f77" Namespace="calico-system" Pod="csi-node-driver-5lqpx" WorkloadEndpoint="ip--172--31--19--28-k8s-csi--node--driver--5lqpx-" Nov 1 00:43:40.280141 
env[1822]: 2025-11-01 00:43:39.886 [INFO][4573] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="120943dea4916e52e9e84ede1040664a7404f57b37e46766e065d8ea86a50f77" Namespace="calico-system" Pod="csi-node-driver-5lqpx" WorkloadEndpoint="ip--172--31--19--28-k8s-csi--node--driver--5lqpx-eth0" Nov 1 00:43:40.280141 env[1822]: 2025-11-01 00:43:40.070 [INFO][4609] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="120943dea4916e52e9e84ede1040664a7404f57b37e46766e065d8ea86a50f77" HandleID="k8s-pod-network.120943dea4916e52e9e84ede1040664a7404f57b37e46766e065d8ea86a50f77" Workload="ip--172--31--19--28-k8s-csi--node--driver--5lqpx-eth0" Nov 1 00:43:40.280141 env[1822]: 2025-11-01 00:43:40.072 [INFO][4609] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="120943dea4916e52e9e84ede1040664a7404f57b37e46766e065d8ea86a50f77" HandleID="k8s-pod-network.120943dea4916e52e9e84ede1040664a7404f57b37e46766e065d8ea86a50f77" Workload="ip--172--31--19--28-k8s-csi--node--driver--5lqpx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00046e210), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-28", "pod":"csi-node-driver-5lqpx", "timestamp":"2025-11-01 00:43:40.070868857 +0000 UTC"}, Hostname:"ip-172-31-19-28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:43:40.280141 env[1822]: 2025-11-01 00:43:40.072 [INFO][4609] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:43:40.280141 env[1822]: 2025-11-01 00:43:40.118 [INFO][4609] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:43:40.280141 env[1822]: 2025-11-01 00:43:40.118 [INFO][4609] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-28' Nov 1 00:43:40.280141 env[1822]: 2025-11-01 00:43:40.171 [INFO][4609] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.120943dea4916e52e9e84ede1040664a7404f57b37e46766e065d8ea86a50f77" host="ip-172-31-19-28" Nov 1 00:43:40.280141 env[1822]: 2025-11-01 00:43:40.177 [INFO][4609] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-28" Nov 1 00:43:40.280141 env[1822]: 2025-11-01 00:43:40.183 [INFO][4609] ipam/ipam.go 511: Trying affinity for 192.168.50.64/26 host="ip-172-31-19-28" Nov 1 00:43:40.280141 env[1822]: 2025-11-01 00:43:40.186 [INFO][4609] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.64/26 host="ip-172-31-19-28" Nov 1 00:43:40.280141 env[1822]: 2025-11-01 00:43:40.190 [INFO][4609] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.64/26 host="ip-172-31-19-28" Nov 1 00:43:40.280141 env[1822]: 2025-11-01 00:43:40.190 [INFO][4609] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.50.64/26 handle="k8s-pod-network.120943dea4916e52e9e84ede1040664a7404f57b37e46766e065d8ea86a50f77" host="ip-172-31-19-28" Nov 1 00:43:40.280141 env[1822]: 2025-11-01 00:43:40.192 [INFO][4609] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.120943dea4916e52e9e84ede1040664a7404f57b37e46766e065d8ea86a50f77 Nov 1 00:43:40.280141 env[1822]: 2025-11-01 00:43:40.201 [INFO][4609] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.50.64/26 handle="k8s-pod-network.120943dea4916e52e9e84ede1040664a7404f57b37e46766e065d8ea86a50f77" host="ip-172-31-19-28" Nov 1 00:43:40.280141 env[1822]: 2025-11-01 00:43:40.218 [INFO][4609] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.50.68/26] block=192.168.50.64/26 handle="k8s-pod-network.120943dea4916e52e9e84ede1040664a7404f57b37e46766e065d8ea86a50f77" 
host="ip-172-31-19-28" Nov 1 00:43:40.280141 env[1822]: 2025-11-01 00:43:40.218 [INFO][4609] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.68/26] handle="k8s-pod-network.120943dea4916e52e9e84ede1040664a7404f57b37e46766e065d8ea86a50f77" host="ip-172-31-19-28" Nov 1 00:43:40.280141 env[1822]: 2025-11-01 00:43:40.220 [INFO][4609] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:43:40.280141 env[1822]: 2025-11-01 00:43:40.220 [INFO][4609] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.50.68/26] IPv6=[] ContainerID="120943dea4916e52e9e84ede1040664a7404f57b37e46766e065d8ea86a50f77" HandleID="k8s-pod-network.120943dea4916e52e9e84ede1040664a7404f57b37e46766e065d8ea86a50f77" Workload="ip--172--31--19--28-k8s-csi--node--driver--5lqpx-eth0" Nov 1 00:43:40.281327 env[1822]: 2025-11-01 00:43:40.241 [INFO][4573] cni-plugin/k8s.go 418: Populated endpoint ContainerID="120943dea4916e52e9e84ede1040664a7404f57b37e46766e065d8ea86a50f77" Namespace="calico-system" Pod="csi-node-driver-5lqpx" WorkloadEndpoint="ip--172--31--19--28-k8s-csi--node--driver--5lqpx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--28-k8s-csi--node--driver--5lqpx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9a6bbdac-9f73-4cc6-aadc-84424d8082ea", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-28", ContainerID:"", Pod:"csi-node-driver-5lqpx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie2a264c2bbd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:43:40.281327 env[1822]: 2025-11-01 00:43:40.242 [INFO][4573] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.68/32] ContainerID="120943dea4916e52e9e84ede1040664a7404f57b37e46766e065d8ea86a50f77" Namespace="calico-system" Pod="csi-node-driver-5lqpx" WorkloadEndpoint="ip--172--31--19--28-k8s-csi--node--driver--5lqpx-eth0" Nov 1 00:43:40.281327 env[1822]: 2025-11-01 00:43:40.242 [INFO][4573] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie2a264c2bbd ContainerID="120943dea4916e52e9e84ede1040664a7404f57b37e46766e065d8ea86a50f77" Namespace="calico-system" Pod="csi-node-driver-5lqpx" WorkloadEndpoint="ip--172--31--19--28-k8s-csi--node--driver--5lqpx-eth0" Nov 1 00:43:40.281327 env[1822]: 2025-11-01 00:43:40.249 [INFO][4573] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="120943dea4916e52e9e84ede1040664a7404f57b37e46766e065d8ea86a50f77" Namespace="calico-system" Pod="csi-node-driver-5lqpx" WorkloadEndpoint="ip--172--31--19--28-k8s-csi--node--driver--5lqpx-eth0" Nov 1 00:43:40.281327 env[1822]: 2025-11-01 00:43:40.251 [INFO][4573] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="120943dea4916e52e9e84ede1040664a7404f57b37e46766e065d8ea86a50f77" Namespace="calico-system" Pod="csi-node-driver-5lqpx" 
WorkloadEndpoint="ip--172--31--19--28-k8s-csi--node--driver--5lqpx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--28-k8s-csi--node--driver--5lqpx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9a6bbdac-9f73-4cc6-aadc-84424d8082ea", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-28", ContainerID:"120943dea4916e52e9e84ede1040664a7404f57b37e46766e065d8ea86a50f77", Pod:"csi-node-driver-5lqpx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie2a264c2bbd", MAC:"d2:20:18:00:f7:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:43:40.281327 env[1822]: 2025-11-01 00:43:40.273 [INFO][4573] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="120943dea4916e52e9e84ede1040664a7404f57b37e46766e065d8ea86a50f77" Namespace="calico-system" Pod="csi-node-driver-5lqpx" WorkloadEndpoint="ip--172--31--19--28-k8s-csi--node--driver--5lqpx-eth0" Nov 1 
00:43:40.312560 systemd-networkd[1497]: caliebcbe53047e: Gained IPv6LL Nov 1 00:43:40.325542 env[1822]: time="2025-11-01T00:43:40.325410350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:43:40.325837 env[1822]: time="2025-11-01T00:43:40.325801170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:43:40.326556 env[1822]: time="2025-11-01T00:43:40.326493011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:43:40.327374 env[1822]: time="2025-11-01T00:43:40.327302933Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/120943dea4916e52e9e84ede1040664a7404f57b37e46766e065d8ea86a50f77 pid=4704 runtime=io.containerd.runc.v2 Nov 1 00:43:40.349485 env[1822]: time="2025-11-01T00:43:40.349444578Z" level=info msg="StopPodSandbox for \"9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853\"" Nov 1 00:43:40.351413 env[1822]: time="2025-11-01T00:43:40.350504087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68cc86985f-qb2p9,Uid:6d3c3149-9beb-44f8-a7ee-d6982872dcbb,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"30429913d59a9c1a438da6224a5de8fb453b655c94dd42cdbddb634d16dae4c2\"" Nov 1 00:43:40.355406 env[1822]: time="2025-11-01T00:43:40.355369515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:43:40.371509 systemd-networkd[1497]: calib7f7db156e6: Link UP Nov 1 00:43:40.375583 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib7f7db156e6: link becomes ready Nov 1 00:43:40.375924 systemd-networkd[1497]: calib7f7db156e6: Gained carrier Nov 1 00:43:40.413398 env[1822]: 2025-11-01 00:43:39.992 [INFO][4584] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--28-k8s-coredns--668d6bf9bc--hj6zc-eth0 coredns-668d6bf9bc- kube-system df0a1bba-62a6-45de-8540-a37de4852942 957 0 2025-11-01 00:42:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-28 coredns-668d6bf9bc-hj6zc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib7f7db156e6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9ad1a997f0cb106b32976fa3603b62b719cabaa2b242fd09ed4c053564de7162" Namespace="kube-system" Pod="coredns-668d6bf9bc-hj6zc" WorkloadEndpoint="ip--172--31--19--28-k8s-coredns--668d6bf9bc--hj6zc-" Nov 1 00:43:40.413398 env[1822]: 2025-11-01 00:43:39.992 [INFO][4584] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9ad1a997f0cb106b32976fa3603b62b719cabaa2b242fd09ed4c053564de7162" Namespace="kube-system" Pod="coredns-668d6bf9bc-hj6zc" WorkloadEndpoint="ip--172--31--19--28-k8s-coredns--668d6bf9bc--hj6zc-eth0" Nov 1 00:43:40.413398 env[1822]: 2025-11-01 00:43:40.097 [INFO][4629] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ad1a997f0cb106b32976fa3603b62b719cabaa2b242fd09ed4c053564de7162" HandleID="k8s-pod-network.9ad1a997f0cb106b32976fa3603b62b719cabaa2b242fd09ed4c053564de7162" Workload="ip--172--31--19--28-k8s-coredns--668d6bf9bc--hj6zc-eth0" Nov 1 00:43:40.413398 env[1822]: 2025-11-01 00:43:40.098 [INFO][4629] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9ad1a997f0cb106b32976fa3603b62b719cabaa2b242fd09ed4c053564de7162" HandleID="k8s-pod-network.9ad1a997f0cb106b32976fa3603b62b719cabaa2b242fd09ed4c053564de7162" Workload="ip--172--31--19--28-k8s-coredns--668d6bf9bc--hj6zc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034faa0), Attrs:map[string]string{"namespace":"kube-system", 
"node":"ip-172-31-19-28", "pod":"coredns-668d6bf9bc-hj6zc", "timestamp":"2025-11-01 00:43:40.097901933 +0000 UTC"}, Hostname:"ip-172-31-19-28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:43:40.413398 env[1822]: 2025-11-01 00:43:40.098 [INFO][4629] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:43:40.413398 env[1822]: 2025-11-01 00:43:40.219 [INFO][4629] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:43:40.413398 env[1822]: 2025-11-01 00:43:40.220 [INFO][4629] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-28' Nov 1 00:43:40.413398 env[1822]: 2025-11-01 00:43:40.271 [INFO][4629] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9ad1a997f0cb106b32976fa3603b62b719cabaa2b242fd09ed4c053564de7162" host="ip-172-31-19-28" Nov 1 00:43:40.413398 env[1822]: 2025-11-01 00:43:40.289 [INFO][4629] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-28" Nov 1 00:43:40.413398 env[1822]: 2025-11-01 00:43:40.301 [INFO][4629] ipam/ipam.go 511: Trying affinity for 192.168.50.64/26 host="ip-172-31-19-28" Nov 1 00:43:40.413398 env[1822]: 2025-11-01 00:43:40.307 [INFO][4629] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.64/26 host="ip-172-31-19-28" Nov 1 00:43:40.413398 env[1822]: 2025-11-01 00:43:40.316 [INFO][4629] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.64/26 host="ip-172-31-19-28" Nov 1 00:43:40.413398 env[1822]: 2025-11-01 00:43:40.316 [INFO][4629] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.50.64/26 handle="k8s-pod-network.9ad1a997f0cb106b32976fa3603b62b719cabaa2b242fd09ed4c053564de7162" host="ip-172-31-19-28" Nov 1 00:43:40.413398 env[1822]: 2025-11-01 00:43:40.319 [INFO][4629] ipam/ipam.go 1780: Creating new 
handle: k8s-pod-network.9ad1a997f0cb106b32976fa3603b62b719cabaa2b242fd09ed4c053564de7162 Nov 1 00:43:40.413398 env[1822]: 2025-11-01 00:43:40.326 [INFO][4629] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.50.64/26 handle="k8s-pod-network.9ad1a997f0cb106b32976fa3603b62b719cabaa2b242fd09ed4c053564de7162" host="ip-172-31-19-28" Nov 1 00:43:40.413398 env[1822]: 2025-11-01 00:43:40.339 [INFO][4629] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.50.69/26] block=192.168.50.64/26 handle="k8s-pod-network.9ad1a997f0cb106b32976fa3603b62b719cabaa2b242fd09ed4c053564de7162" host="ip-172-31-19-28" Nov 1 00:43:40.413398 env[1822]: 2025-11-01 00:43:40.339 [INFO][4629] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.69/26] handle="k8s-pod-network.9ad1a997f0cb106b32976fa3603b62b719cabaa2b242fd09ed4c053564de7162" host="ip-172-31-19-28" Nov 1 00:43:40.413398 env[1822]: 2025-11-01 00:43:40.345 [INFO][4629] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:43:40.413398 env[1822]: 2025-11-01 00:43:40.346 [INFO][4629] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.50.69/26] IPv6=[] ContainerID="9ad1a997f0cb106b32976fa3603b62b719cabaa2b242fd09ed4c053564de7162" HandleID="k8s-pod-network.9ad1a997f0cb106b32976fa3603b62b719cabaa2b242fd09ed4c053564de7162" Workload="ip--172--31--19--28-k8s-coredns--668d6bf9bc--hj6zc-eth0" Nov 1 00:43:40.414369 env[1822]: 2025-11-01 00:43:40.357 [INFO][4584] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9ad1a997f0cb106b32976fa3603b62b719cabaa2b242fd09ed4c053564de7162" Namespace="kube-system" Pod="coredns-668d6bf9bc-hj6zc" WorkloadEndpoint="ip--172--31--19--28-k8s-coredns--668d6bf9bc--hj6zc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--28-k8s-coredns--668d6bf9bc--hj6zc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", 
UID:"df0a1bba-62a6-45de-8540-a37de4852942", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 42, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-28", ContainerID:"", Pod:"coredns-668d6bf9bc-hj6zc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib7f7db156e6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:43:40.414369 env[1822]: 2025-11-01 00:43:40.358 [INFO][4584] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.69/32] ContainerID="9ad1a997f0cb106b32976fa3603b62b719cabaa2b242fd09ed4c053564de7162" Namespace="kube-system" Pod="coredns-668d6bf9bc-hj6zc" WorkloadEndpoint="ip--172--31--19--28-k8s-coredns--668d6bf9bc--hj6zc-eth0" Nov 1 00:43:40.414369 env[1822]: 2025-11-01 00:43:40.358 [INFO][4584] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to 
calib7f7db156e6 ContainerID="9ad1a997f0cb106b32976fa3603b62b719cabaa2b242fd09ed4c053564de7162" Namespace="kube-system" Pod="coredns-668d6bf9bc-hj6zc" WorkloadEndpoint="ip--172--31--19--28-k8s-coredns--668d6bf9bc--hj6zc-eth0" Nov 1 00:43:40.414369 env[1822]: 2025-11-01 00:43:40.390 [INFO][4584] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9ad1a997f0cb106b32976fa3603b62b719cabaa2b242fd09ed4c053564de7162" Namespace="kube-system" Pod="coredns-668d6bf9bc-hj6zc" WorkloadEndpoint="ip--172--31--19--28-k8s-coredns--668d6bf9bc--hj6zc-eth0" Nov 1 00:43:40.414369 env[1822]: 2025-11-01 00:43:40.391 [INFO][4584] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9ad1a997f0cb106b32976fa3603b62b719cabaa2b242fd09ed4c053564de7162" Namespace="kube-system" Pod="coredns-668d6bf9bc-hj6zc" WorkloadEndpoint="ip--172--31--19--28-k8s-coredns--668d6bf9bc--hj6zc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--28-k8s-coredns--668d6bf9bc--hj6zc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"df0a1bba-62a6-45de-8540-a37de4852942", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 42, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-28", ContainerID:"9ad1a997f0cb106b32976fa3603b62b719cabaa2b242fd09ed4c053564de7162", Pod:"coredns-668d6bf9bc-hj6zc", 
Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib7f7db156e6", MAC:"86:0c:fb:96:25:9d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:43:40.414369 env[1822]: 2025-11-01 00:43:40.405 [INFO][4584] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9ad1a997f0cb106b32976fa3603b62b719cabaa2b242fd09ed4c053564de7162" Namespace="kube-system" Pod="coredns-668d6bf9bc-hj6zc" WorkloadEndpoint="ip--172--31--19--28-k8s-coredns--668d6bf9bc--hj6zc-eth0" Nov 1 00:43:40.459329 env[1822]: time="2025-11-01T00:43:40.458260446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5lqpx,Uid:9a6bbdac-9f73-4cc6-aadc-84424d8082ea,Namespace:calico-system,Attempt:1,} returns sandbox id \"120943dea4916e52e9e84ede1040664a7404f57b37e46766e065d8ea86a50f77\"" Nov 1 00:43:40.485497 systemd-networkd[1497]: cali1c5b8baa747: Link UP Nov 1 00:43:40.490246 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali1c5b8baa747: link becomes ready Nov 1 00:43:40.488596 systemd-networkd[1497]: cali1c5b8baa747: Gained carrier Nov 1 00:43:40.495266 systemd[1]: run-netns-cni\x2d58936de4\x2d7ac9\x2d1a2d\x2dfcbc\x2da63fb1c4d050.mount: Deactivated successfully. Nov 1 00:43:40.495539 systemd[1]: run-netns-cni\x2dd2066ba9\x2d1d59\x2d5853\x2daa46\x2d8fca18fbca94.mount: Deactivated successfully. 
Nov 1 00:43:40.524749 env[1822]: 2025-11-01 00:43:40.037 [INFO][4593] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--28-k8s-goldmane--666569f655--dn6kn-eth0 goldmane-666569f655- calico-system 6cd31c79-d021-4671-b1b1-16d458644a79 956 0 2025-11-01 00:43:14 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-19-28 goldmane-666569f655-dn6kn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1c5b8baa747 [] [] }} ContainerID="2b5e0e0006d1aea946b295a5fde10ce58ac85e832ce3c3e557df7d6bdd65c311" Namespace="calico-system" Pod="goldmane-666569f655-dn6kn" WorkloadEndpoint="ip--172--31--19--28-k8s-goldmane--666569f655--dn6kn-" Nov 1 00:43:40.524749 env[1822]: 2025-11-01 00:43:40.037 [INFO][4593] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2b5e0e0006d1aea946b295a5fde10ce58ac85e832ce3c3e557df7d6bdd65c311" Namespace="calico-system" Pod="goldmane-666569f655-dn6kn" WorkloadEndpoint="ip--172--31--19--28-k8s-goldmane--666569f655--dn6kn-eth0" Nov 1 00:43:40.524749 env[1822]: 2025-11-01 00:43:40.128 [INFO][4635] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2b5e0e0006d1aea946b295a5fde10ce58ac85e832ce3c3e557df7d6bdd65c311" HandleID="k8s-pod-network.2b5e0e0006d1aea946b295a5fde10ce58ac85e832ce3c3e557df7d6bdd65c311" Workload="ip--172--31--19--28-k8s-goldmane--666569f655--dn6kn-eth0" Nov 1 00:43:40.524749 env[1822]: 2025-11-01 00:43:40.129 [INFO][4635] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2b5e0e0006d1aea946b295a5fde10ce58ac85e832ce3c3e557df7d6bdd65c311" HandleID="k8s-pod-network.2b5e0e0006d1aea946b295a5fde10ce58ac85e832ce3c3e557df7d6bdd65c311" Workload="ip--172--31--19--28-k8s-goldmane--666569f655--dn6kn-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd5a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-28", "pod":"goldmane-666569f655-dn6kn", "timestamp":"2025-11-01 00:43:40.128627418 +0000 UTC"}, Hostname:"ip-172-31-19-28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:43:40.524749 env[1822]: 2025-11-01 00:43:40.129 [INFO][4635] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:43:40.524749 env[1822]: 2025-11-01 00:43:40.339 [INFO][4635] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:43:40.524749 env[1822]: 2025-11-01 00:43:40.339 [INFO][4635] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-28' Nov 1 00:43:40.524749 env[1822]: 2025-11-01 00:43:40.370 [INFO][4635] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2b5e0e0006d1aea946b295a5fde10ce58ac85e832ce3c3e557df7d6bdd65c311" host="ip-172-31-19-28" Nov 1 00:43:40.524749 env[1822]: 2025-11-01 00:43:40.392 [INFO][4635] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-28" Nov 1 00:43:40.524749 env[1822]: 2025-11-01 00:43:40.426 [INFO][4635] ipam/ipam.go 511: Trying affinity for 192.168.50.64/26 host="ip-172-31-19-28" Nov 1 00:43:40.524749 env[1822]: 2025-11-01 00:43:40.431 [INFO][4635] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.64/26 host="ip-172-31-19-28" Nov 1 00:43:40.524749 env[1822]: 2025-11-01 00:43:40.440 [INFO][4635] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.64/26 host="ip-172-31-19-28" Nov 1 00:43:40.524749 env[1822]: 2025-11-01 00:43:40.440 [INFO][4635] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.50.64/26 
handle="k8s-pod-network.2b5e0e0006d1aea946b295a5fde10ce58ac85e832ce3c3e557df7d6bdd65c311" host="ip-172-31-19-28" Nov 1 00:43:40.524749 env[1822]: 2025-11-01 00:43:40.442 [INFO][4635] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2b5e0e0006d1aea946b295a5fde10ce58ac85e832ce3c3e557df7d6bdd65c311 Nov 1 00:43:40.524749 env[1822]: 2025-11-01 00:43:40.449 [INFO][4635] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.50.64/26 handle="k8s-pod-network.2b5e0e0006d1aea946b295a5fde10ce58ac85e832ce3c3e557df7d6bdd65c311" host="ip-172-31-19-28" Nov 1 00:43:40.524749 env[1822]: 2025-11-01 00:43:40.460 [INFO][4635] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.50.70/26] block=192.168.50.64/26 handle="k8s-pod-network.2b5e0e0006d1aea946b295a5fde10ce58ac85e832ce3c3e557df7d6bdd65c311" host="ip-172-31-19-28" Nov 1 00:43:40.524749 env[1822]: 2025-11-01 00:43:40.464 [INFO][4635] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.70/26] handle="k8s-pod-network.2b5e0e0006d1aea946b295a5fde10ce58ac85e832ce3c3e557df7d6bdd65c311" host="ip-172-31-19-28" Nov 1 00:43:40.524749 env[1822]: 2025-11-01 00:43:40.464 [INFO][4635] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:43:40.524749 env[1822]: 2025-11-01 00:43:40.464 [INFO][4635] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.50.70/26] IPv6=[] ContainerID="2b5e0e0006d1aea946b295a5fde10ce58ac85e832ce3c3e557df7d6bdd65c311" HandleID="k8s-pod-network.2b5e0e0006d1aea946b295a5fde10ce58ac85e832ce3c3e557df7d6bdd65c311" Workload="ip--172--31--19--28-k8s-goldmane--666569f655--dn6kn-eth0" Nov 1 00:43:40.525905 env[1822]: 2025-11-01 00:43:40.468 [INFO][4593] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2b5e0e0006d1aea946b295a5fde10ce58ac85e832ce3c3e557df7d6bdd65c311" Namespace="calico-system" Pod="goldmane-666569f655-dn6kn" WorkloadEndpoint="ip--172--31--19--28-k8s-goldmane--666569f655--dn6kn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--28-k8s-goldmane--666569f655--dn6kn-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"6cd31c79-d021-4671-b1b1-16d458644a79", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-28", ContainerID:"", Pod:"goldmane-666569f655-dn6kn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.50.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali1c5b8baa747", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:43:40.525905 env[1822]: 2025-11-01 00:43:40.468 [INFO][4593] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.70/32] ContainerID="2b5e0e0006d1aea946b295a5fde10ce58ac85e832ce3c3e557df7d6bdd65c311" Namespace="calico-system" Pod="goldmane-666569f655-dn6kn" WorkloadEndpoint="ip--172--31--19--28-k8s-goldmane--666569f655--dn6kn-eth0" Nov 1 00:43:40.525905 env[1822]: 2025-11-01 00:43:40.468 [INFO][4593] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c5b8baa747 ContainerID="2b5e0e0006d1aea946b295a5fde10ce58ac85e832ce3c3e557df7d6bdd65c311" Namespace="calico-system" Pod="goldmane-666569f655-dn6kn" WorkloadEndpoint="ip--172--31--19--28-k8s-goldmane--666569f655--dn6kn-eth0" Nov 1 00:43:40.525905 env[1822]: 2025-11-01 00:43:40.489 [INFO][4593] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2b5e0e0006d1aea946b295a5fde10ce58ac85e832ce3c3e557df7d6bdd65c311" Namespace="calico-system" Pod="goldmane-666569f655-dn6kn" WorkloadEndpoint="ip--172--31--19--28-k8s-goldmane--666569f655--dn6kn-eth0" Nov 1 00:43:40.525905 env[1822]: 2025-11-01 00:43:40.490 [INFO][4593] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2b5e0e0006d1aea946b295a5fde10ce58ac85e832ce3c3e557df7d6bdd65c311" Namespace="calico-system" Pod="goldmane-666569f655-dn6kn" WorkloadEndpoint="ip--172--31--19--28-k8s-goldmane--666569f655--dn6kn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--28-k8s-goldmane--666569f655--dn6kn-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"6cd31c79-d021-4671-b1b1-16d458644a79", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 
0, 43, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-28", ContainerID:"2b5e0e0006d1aea946b295a5fde10ce58ac85e832ce3c3e557df7d6bdd65c311", Pod:"goldmane-666569f655-dn6kn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.50.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1c5b8baa747", MAC:"3e:73:26:e0:a4:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:43:40.525905 env[1822]: 2025-11-01 00:43:40.511 [INFO][4593] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2b5e0e0006d1aea946b295a5fde10ce58ac85e832ce3c3e557df7d6bdd65c311" Namespace="calico-system" Pod="goldmane-666569f655-dn6kn" WorkloadEndpoint="ip--172--31--19--28-k8s-goldmane--666569f655--dn6kn-eth0" Nov 1 00:43:40.547741 env[1822]: time="2025-11-01T00:43:40.547640356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:43:40.547741 env[1822]: time="2025-11-01T00:43:40.547706935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:43:40.547982 env[1822]: time="2025-11-01T00:43:40.547722879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:43:40.549059 env[1822]: time="2025-11-01T00:43:40.548967105Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2b5e0e0006d1aea946b295a5fde10ce58ac85e832ce3c3e557df7d6bdd65c311 pid=4786 runtime=io.containerd.runc.v2 Nov 1 00:43:40.552792 env[1822]: time="2025-11-01T00:43:40.552701924Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:43:40.554483 env[1822]: time="2025-11-01T00:43:40.554380927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:43:40.554483 env[1822]: time="2025-11-01T00:43:40.554460265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:43:40.566608 env[1822]: time="2025-11-01T00:43:40.555010466Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ad1a997f0cb106b32976fa3603b62b719cabaa2b242fd09ed4c053564de7162 pid=4785 runtime=io.containerd.runc.v2 Nov 1 00:43:40.596979 kernel: kauditd_printk_skb: 576 callbacks suppressed Nov 1 00:43:40.598542 kernel: audit: type=1325 audit(1761957820.574:422): table=filter:115 family=2 entries=54 op=nft_register_chain pid=4807 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:43:40.599204 kernel: audit: type=1300 audit(1761957820.574:422): arch=c000003e syscall=46 success=yes exit=29396 a0=3 a1=7ffe3f851b10 a2=0 a3=7ffe3f851afc items=0 ppid=4079 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:40.574000 audit[4807]: NETFILTER_CFG table=filter:115 family=2 entries=54 
op=nft_register_chain pid=4807 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:43:40.574000 audit[4807]: SYSCALL arch=c000003e syscall=46 success=yes exit=29396 a0=3 a1=7ffe3f851b10 a2=0 a3=7ffe3f851afc items=0 ppid=4079 pid=4807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:40.607494 kernel: audit: type=1327 audit(1761957820.574:422): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:43:40.574000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:43:40.679840 env[1822]: time="2025-11-01T00:43:40.679766211Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:43:40.707258 env[1822]: time="2025-11-01T00:43:40.707187391Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:43:40.743426 kubelet[2748]: E1101 00:43:40.743371 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:43:40.743680 kubelet[2748]: E1101 00:43:40.743640 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:43:40.744221 kubelet[2748]: E1101 00:43:40.744145 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ghsd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68cc86985f-qb2p9_calico-apiserver(6d3c3149-9beb-44f8-a7ee-d6982872dcbb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:43:40.750177 kubelet[2748]: E1101 00:43:40.750106 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68cc86985f-qb2p9" podUID="6d3c3149-9beb-44f8-a7ee-d6982872dcbb" Nov 1 00:43:40.759231 env[1822]: time="2025-11-01T00:43:40.759178071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:43:40.771498 kernel: audit: type=1325 audit(1761957820.759:423): 
table=filter:116 family=2 entries=110 op=nft_register_chain pid=4852 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:43:40.771775 kernel: audit: type=1300 audit(1761957820.759:423): arch=c000003e syscall=46 success=yes exit=62152 a0=3 a1=7ffe7e084bc0 a2=0 a3=7ffe7e084bac items=0 ppid=4079 pid=4852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:40.759000 audit[4852]: NETFILTER_CFG table=filter:116 family=2 entries=110 op=nft_register_chain pid=4852 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:43:40.759000 audit[4852]: SYSCALL arch=c000003e syscall=46 success=yes exit=62152 a0=3 a1=7ffe7e084bc0 a2=0 a3=7ffe7e084bac items=0 ppid=4079 pid=4852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:40.759000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:43:40.780426 kernel: audit: type=1327 audit(1761957820.759:423): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:43:40.799288 kubelet[2748]: E1101 00:43:40.798195 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68cc86985f-qb2p9" podUID="6d3c3149-9beb-44f8-a7ee-d6982872dcbb" Nov 1 00:43:40.859894 kernel: audit: type=1325 audit(1761957820.845:424): table=filter:117 family=2 entries=20 op=nft_register_rule pid=4858 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:40.860009 kernel: audit: type=1300 audit(1761957820.845:424): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd440b7e00 a2=0 a3=7ffd440b7dec items=0 ppid=2891 pid=4858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:40.845000 audit[4858]: NETFILTER_CFG table=filter:117 family=2 entries=20 op=nft_register_rule pid=4858 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:40.845000 audit[4858]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd440b7e00 a2=0 a3=7ffd440b7dec items=0 ppid=2891 pid=4858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:40.868861 kernel: audit: type=1327 audit(1761957820.845:424): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:40.845000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:40.870198 env[1822]: 2025-11-01 00:43:40.629 [INFO][4742] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" Nov 1 00:43:40.870198 env[1822]: 2025-11-01 00:43:40.630 [INFO][4742] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" iface="eth0" netns="/var/run/netns/cni-8f1437bc-e0a7-3d97-b5ea-210c4d5d69c5" Nov 1 00:43:40.870198 env[1822]: 2025-11-01 00:43:40.630 [INFO][4742] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" iface="eth0" netns="/var/run/netns/cni-8f1437bc-e0a7-3d97-b5ea-210c4d5d69c5" Nov 1 00:43:40.870198 env[1822]: 2025-11-01 00:43:40.638 [INFO][4742] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" iface="eth0" netns="/var/run/netns/cni-8f1437bc-e0a7-3d97-b5ea-210c4d5d69c5" Nov 1 00:43:40.870198 env[1822]: 2025-11-01 00:43:40.638 [INFO][4742] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" Nov 1 00:43:40.870198 env[1822]: 2025-11-01 00:43:40.638 [INFO][4742] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" Nov 1 00:43:40.870198 env[1822]: 2025-11-01 00:43:40.791 [INFO][4830] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" HandleID="k8s-pod-network.9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" Workload="ip--172--31--19--28-k8s-calico--kube--controllers--6db4456f5f--n6pzz-eth0" Nov 1 00:43:40.870198 env[1822]: 2025-11-01 00:43:40.796 [INFO][4830] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:43:40.870198 env[1822]: 2025-11-01 00:43:40.797 [INFO][4830] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:43:40.870198 env[1822]: 2025-11-01 00:43:40.821 [WARNING][4830] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" HandleID="k8s-pod-network.9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" Workload="ip--172--31--19--28-k8s-calico--kube--controllers--6db4456f5f--n6pzz-eth0" Nov 1 00:43:40.870198 env[1822]: 2025-11-01 00:43:40.821 [INFO][4830] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" HandleID="k8s-pod-network.9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" Workload="ip--172--31--19--28-k8s-calico--kube--controllers--6db4456f5f--n6pzz-eth0" Nov 1 00:43:40.870198 env[1822]: 2025-11-01 00:43:40.825 [INFO][4830] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:43:40.870198 env[1822]: 2025-11-01 00:43:40.848 [INFO][4742] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" Nov 1 00:43:40.871747 env[1822]: time="2025-11-01T00:43:40.871703383Z" level=info msg="TearDown network for sandbox \"9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853\" successfully" Nov 1 00:43:40.871928 env[1822]: time="2025-11-01T00:43:40.871897509Z" level=info msg="StopPodSandbox for \"9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853\" returns successfully" Nov 1 00:43:40.867000 audit[4858]: NETFILTER_CFG table=nat:118 family=2 entries=14 op=nft_register_rule pid=4858 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:40.873273 env[1822]: time="2025-11-01T00:43:40.872929741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6db4456f5f-n6pzz,Uid:472f8f92-5499-46b7-8902-95424bad4337,Namespace:calico-system,Attempt:1,}" Nov 1 00:43:40.879365 kernel: audit: type=1325 audit(1761957820.867:425): table=nat:118 family=2 entries=14 op=nft_register_rule pid=4858 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 
00:43:40.867000 audit[4858]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffd440b7e00 a2=0 a3=0 items=0 ppid=2891 pid=4858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:40.867000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:40.884132 env[1822]: time="2025-11-01T00:43:40.884086350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hj6zc,Uid:df0a1bba-62a6-45de-8540-a37de4852942,Namespace:kube-system,Attempt:1,} returns sandbox id \"9ad1a997f0cb106b32976fa3603b62b719cabaa2b242fd09ed4c053564de7162\"" Nov 1 00:43:40.895762 env[1822]: time="2025-11-01T00:43:40.895666579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dn6kn,Uid:6cd31c79-d021-4671-b1b1-16d458644a79,Namespace:calico-system,Attempt:1,} returns sandbox id \"2b5e0e0006d1aea946b295a5fde10ce58ac85e832ce3c3e557df7d6bdd65c311\"" Nov 1 00:43:40.924914 env[1822]: time="2025-11-01T00:43:40.924871659Z" level=info msg="CreateContainer within sandbox \"9ad1a997f0cb106b32976fa3603b62b719cabaa2b242fd09ed4c053564de7162\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:43:40.949415 env[1822]: time="2025-11-01T00:43:40.947066333Z" level=info msg="CreateContainer within sandbox \"9ad1a997f0cb106b32976fa3603b62b719cabaa2b242fd09ed4c053564de7162\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e87d49d622a387dfc64332e165f0f67e62e6b9452c05792588f622db27d2e622\"" Nov 1 00:43:40.952225 env[1822]: time="2025-11-01T00:43:40.952183089Z" level=info msg="StartContainer for \"e87d49d622a387dfc64332e165f0f67e62e6b9452c05792588f622db27d2e622\"" Nov 1 00:43:41.050489 env[1822]: time="2025-11-01T00:43:41.047622209Z" level=info msg="trying next host - 
response was http.StatusNotFound" host=ghcr.io Nov 1 00:43:41.050489 env[1822]: time="2025-11-01T00:43:41.048874704Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:43:41.050745 kubelet[2748]: E1101 00:43:41.049247 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:43:41.050745 kubelet[2748]: E1101 00:43:41.049308 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:43:41.050745 kubelet[2748]: E1101 00:43:41.049617 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4kdwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5lqpx_calico-system(9a6bbdac-9f73-4cc6-aadc-84424d8082ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:43:41.061079 env[1822]: time="2025-11-01T00:43:41.059416090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:43:41.074995 env[1822]: time="2025-11-01T00:43:41.074937372Z" level=info msg="StartContainer for \"e87d49d622a387dfc64332e165f0f67e62e6b9452c05792588f622db27d2e622\" returns successfully" Nov 1 00:43:41.100469 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali6c5f1534742: link becomes ready Nov 1 00:43:41.101126 systemd-networkd[1497]: cali6c5f1534742: Link UP Nov 1 00:43:41.101293 systemd-networkd[1497]: cali6c5f1534742: Gained carrier Nov 1 00:43:41.123834 env[1822]: 2025-11-01 00:43:40.940 [INFO][4870] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--28-k8s-calico--kube--controllers--6db4456f5f--n6pzz-eth0 calico-kube-controllers-6db4456f5f- calico-system 472f8f92-5499-46b7-8902-95424bad4337 989 0 2025-11-01 00:43:16 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6db4456f5f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-19-28 calico-kube-controllers-6db4456f5f-n6pzz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6c5f1534742 [] [] }} ContainerID="8485d7b2b09a04d9064c10b4165a754a841ba792ff062b6fb5d24fdb5fe0c5da" Namespace="calico-system" Pod="calico-kube-controllers-6db4456f5f-n6pzz" WorkloadEndpoint="ip--172--31--19--28-k8s-calico--kube--controllers--6db4456f5f--n6pzz-" Nov 1 00:43:41.123834 env[1822]: 2025-11-01 00:43:40.941 [INFO][4870] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8485d7b2b09a04d9064c10b4165a754a841ba792ff062b6fb5d24fdb5fe0c5da" Namespace="calico-system" Pod="calico-kube-controllers-6db4456f5f-n6pzz" 
WorkloadEndpoint="ip--172--31--19--28-k8s-calico--kube--controllers--6db4456f5f--n6pzz-eth0" Nov 1 00:43:41.123834 env[1822]: 2025-11-01 00:43:40.987 [INFO][4884] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8485d7b2b09a04d9064c10b4165a754a841ba792ff062b6fb5d24fdb5fe0c5da" HandleID="k8s-pod-network.8485d7b2b09a04d9064c10b4165a754a841ba792ff062b6fb5d24fdb5fe0c5da" Workload="ip--172--31--19--28-k8s-calico--kube--controllers--6db4456f5f--n6pzz-eth0" Nov 1 00:43:41.123834 env[1822]: 2025-11-01 00:43:40.988 [INFO][4884] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8485d7b2b09a04d9064c10b4165a754a841ba792ff062b6fb5d24fdb5fe0c5da" HandleID="k8s-pod-network.8485d7b2b09a04d9064c10b4165a754a841ba792ff062b6fb5d24fdb5fe0c5da" Workload="ip--172--31--19--28-k8s-calico--kube--controllers--6db4456f5f--n6pzz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ccfe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-28", "pod":"calico-kube-controllers-6db4456f5f-n6pzz", "timestamp":"2025-11-01 00:43:40.987760307 +0000 UTC"}, Hostname:"ip-172-31-19-28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:43:41.123834 env[1822]: 2025-11-01 00:43:40.988 [INFO][4884] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:43:41.123834 env[1822]: 2025-11-01 00:43:40.988 [INFO][4884] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:43:41.123834 env[1822]: 2025-11-01 00:43:40.988 [INFO][4884] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-28' Nov 1 00:43:41.123834 env[1822]: 2025-11-01 00:43:40.997 [INFO][4884] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8485d7b2b09a04d9064c10b4165a754a841ba792ff062b6fb5d24fdb5fe0c5da" host="ip-172-31-19-28" Nov 1 00:43:41.123834 env[1822]: 2025-11-01 00:43:41.006 [INFO][4884] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-28" Nov 1 00:43:41.123834 env[1822]: 2025-11-01 00:43:41.018 [INFO][4884] ipam/ipam.go 511: Trying affinity for 192.168.50.64/26 host="ip-172-31-19-28" Nov 1 00:43:41.123834 env[1822]: 2025-11-01 00:43:41.021 [INFO][4884] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.64/26 host="ip-172-31-19-28" Nov 1 00:43:41.123834 env[1822]: 2025-11-01 00:43:41.036 [INFO][4884] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.64/26 host="ip-172-31-19-28" Nov 1 00:43:41.123834 env[1822]: 2025-11-01 00:43:41.036 [INFO][4884] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.50.64/26 handle="k8s-pod-network.8485d7b2b09a04d9064c10b4165a754a841ba792ff062b6fb5d24fdb5fe0c5da" host="ip-172-31-19-28" Nov 1 00:43:41.123834 env[1822]: 2025-11-01 00:43:41.042 [INFO][4884] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8485d7b2b09a04d9064c10b4165a754a841ba792ff062b6fb5d24fdb5fe0c5da Nov 1 00:43:41.123834 env[1822]: 2025-11-01 00:43:41.070 [INFO][4884] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.50.64/26 handle="k8s-pod-network.8485d7b2b09a04d9064c10b4165a754a841ba792ff062b6fb5d24fdb5fe0c5da" host="ip-172-31-19-28" Nov 1 00:43:41.123834 env[1822]: 2025-11-01 00:43:41.086 [INFO][4884] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.50.71/26] block=192.168.50.64/26 handle="k8s-pod-network.8485d7b2b09a04d9064c10b4165a754a841ba792ff062b6fb5d24fdb5fe0c5da" 
host="ip-172-31-19-28" Nov 1 00:43:41.123834 env[1822]: 2025-11-01 00:43:41.087 [INFO][4884] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.71/26] handle="k8s-pod-network.8485d7b2b09a04d9064c10b4165a754a841ba792ff062b6fb5d24fdb5fe0c5da" host="ip-172-31-19-28" Nov 1 00:43:41.123834 env[1822]: 2025-11-01 00:43:41.087 [INFO][4884] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:43:41.123834 env[1822]: 2025-11-01 00:43:41.087 [INFO][4884] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.50.71/26] IPv6=[] ContainerID="8485d7b2b09a04d9064c10b4165a754a841ba792ff062b6fb5d24fdb5fe0c5da" HandleID="k8s-pod-network.8485d7b2b09a04d9064c10b4165a754a841ba792ff062b6fb5d24fdb5fe0c5da" Workload="ip--172--31--19--28-k8s-calico--kube--controllers--6db4456f5f--n6pzz-eth0" Nov 1 00:43:41.124885 env[1822]: 2025-11-01 00:43:41.091 [INFO][4870] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8485d7b2b09a04d9064c10b4165a754a841ba792ff062b6fb5d24fdb5fe0c5da" Namespace="calico-system" Pod="calico-kube-controllers-6db4456f5f-n6pzz" WorkloadEndpoint="ip--172--31--19--28-k8s-calico--kube--controllers--6db4456f5f--n6pzz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--28-k8s-calico--kube--controllers--6db4456f5f--n6pzz-eth0", GenerateName:"calico-kube-controllers-6db4456f5f-", Namespace:"calico-system", SelfLink:"", UID:"472f8f92-5499-46b7-8902-95424bad4337", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6db4456f5f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-28", ContainerID:"", Pod:"calico-kube-controllers-6db4456f5f-n6pzz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6c5f1534742", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:43:41.124885 env[1822]: 2025-11-01 00:43:41.091 [INFO][4870] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.71/32] ContainerID="8485d7b2b09a04d9064c10b4165a754a841ba792ff062b6fb5d24fdb5fe0c5da" Namespace="calico-system" Pod="calico-kube-controllers-6db4456f5f-n6pzz" WorkloadEndpoint="ip--172--31--19--28-k8s-calico--kube--controllers--6db4456f5f--n6pzz-eth0" Nov 1 00:43:41.124885 env[1822]: 2025-11-01 00:43:41.091 [INFO][4870] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6c5f1534742 ContainerID="8485d7b2b09a04d9064c10b4165a754a841ba792ff062b6fb5d24fdb5fe0c5da" Namespace="calico-system" Pod="calico-kube-controllers-6db4456f5f-n6pzz" WorkloadEndpoint="ip--172--31--19--28-k8s-calico--kube--controllers--6db4456f5f--n6pzz-eth0" Nov 1 00:43:41.124885 env[1822]: 2025-11-01 00:43:41.092 [INFO][4870] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8485d7b2b09a04d9064c10b4165a754a841ba792ff062b6fb5d24fdb5fe0c5da" Namespace="calico-system" Pod="calico-kube-controllers-6db4456f5f-n6pzz" WorkloadEndpoint="ip--172--31--19--28-k8s-calico--kube--controllers--6db4456f5f--n6pzz-eth0" Nov 1 00:43:41.124885 env[1822]: 2025-11-01 00:43:41.093 [INFO][4870] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8485d7b2b09a04d9064c10b4165a754a841ba792ff062b6fb5d24fdb5fe0c5da" Namespace="calico-system" Pod="calico-kube-controllers-6db4456f5f-n6pzz" WorkloadEndpoint="ip--172--31--19--28-k8s-calico--kube--controllers--6db4456f5f--n6pzz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--28-k8s-calico--kube--controllers--6db4456f5f--n6pzz-eth0", GenerateName:"calico-kube-controllers-6db4456f5f-", Namespace:"calico-system", SelfLink:"", UID:"472f8f92-5499-46b7-8902-95424bad4337", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6db4456f5f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-28", ContainerID:"8485d7b2b09a04d9064c10b4165a754a841ba792ff062b6fb5d24fdb5fe0c5da", Pod:"calico-kube-controllers-6db4456f5f-n6pzz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6c5f1534742", MAC:"2e:ed:eb:28:dc:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:43:41.124885 env[1822]: 2025-11-01 
00:43:41.104 [INFO][4870] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8485d7b2b09a04d9064c10b4165a754a841ba792ff062b6fb5d24fdb5fe0c5da" Namespace="calico-system" Pod="calico-kube-controllers-6db4456f5f-n6pzz" WorkloadEndpoint="ip--172--31--19--28-k8s-calico--kube--controllers--6db4456f5f--n6pzz-eth0" Nov 1 00:43:41.159000 audit[4933]: NETFILTER_CFG table=filter:119 family=2 entries=52 op=nft_register_chain pid=4933 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:43:41.159000 audit[4933]: SYSCALL arch=c000003e syscall=46 success=yes exit=24312 a0=3 a1=7ffc6e4aeb30 a2=0 a3=7ffc6e4aeb1c items=0 ppid=4079 pid=4933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:41.159000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:43:41.171436 env[1822]: time="2025-11-01T00:43:41.171307407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:43:41.171767 env[1822]: time="2025-11-01T00:43:41.171728337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:43:41.172033 env[1822]: time="2025-11-01T00:43:41.171981709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:43:41.172814 env[1822]: time="2025-11-01T00:43:41.172724323Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8485d7b2b09a04d9064c10b4165a754a841ba792ff062b6fb5d24fdb5fe0c5da pid=4938 runtime=io.containerd.runc.v2 Nov 1 00:43:41.270296 env[1822]: time="2025-11-01T00:43:41.270244794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6db4456f5f-n6pzz,Uid:472f8f92-5499-46b7-8902-95424bad4337,Namespace:calico-system,Attempt:1,} returns sandbox id \"8485d7b2b09a04d9064c10b4165a754a841ba792ff062b6fb5d24fdb5fe0c5da\"" Nov 1 00:43:41.319596 env[1822]: time="2025-11-01T00:43:41.319539516Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:43:41.320763 env[1822]: time="2025-11-01T00:43:41.320699902Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:43:41.321071 kubelet[2748]: E1101 00:43:41.321031 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:43:41.321639 kubelet[2748]: E1101 00:43:41.321604 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:43:41.322074 kubelet[2748]: E1101 00:43:41.322011 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wplvg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dn6kn_calico-system(6cd31c79-d021-4671-b1b1-16d458644a79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:43:41.322887 env[1822]: time="2025-11-01T00:43:41.322845810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:43:41.323485 kubelet[2748]: E1101 00:43:41.323442 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dn6kn" podUID="6cd31c79-d021-4671-b1b1-16d458644a79" Nov 1 00:43:41.352234 env[1822]: time="2025-11-01T00:43:41.352198358Z" level=info msg="StopPodSandbox for 
\"f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92\"" Nov 1 00:43:41.465317 env[1822]: 2025-11-01 00:43:41.409 [INFO][4987] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" Nov 1 00:43:41.465317 env[1822]: 2025-11-01 00:43:41.410 [INFO][4987] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" iface="eth0" netns="/var/run/netns/cni-c54ecd44-da9d-31da-0a73-cc18bb58a13a" Nov 1 00:43:41.465317 env[1822]: 2025-11-01 00:43:41.411 [INFO][4987] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" iface="eth0" netns="/var/run/netns/cni-c54ecd44-da9d-31da-0a73-cc18bb58a13a" Nov 1 00:43:41.465317 env[1822]: 2025-11-01 00:43:41.411 [INFO][4987] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" iface="eth0" netns="/var/run/netns/cni-c54ecd44-da9d-31da-0a73-cc18bb58a13a" Nov 1 00:43:41.465317 env[1822]: 2025-11-01 00:43:41.411 [INFO][4987] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" Nov 1 00:43:41.465317 env[1822]: 2025-11-01 00:43:41.411 [INFO][4987] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" Nov 1 00:43:41.465317 env[1822]: 2025-11-01 00:43:41.448 [INFO][4994] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" HandleID="k8s-pod-network.f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" Workload="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--ffnk6-eth0" Nov 1 00:43:41.465317 env[1822]: 2025-11-01 00:43:41.448 [INFO][4994] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:43:41.465317 env[1822]: 2025-11-01 00:43:41.448 [INFO][4994] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:43:41.465317 env[1822]: 2025-11-01 00:43:41.457 [WARNING][4994] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" HandleID="k8s-pod-network.f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" Workload="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--ffnk6-eth0" Nov 1 00:43:41.465317 env[1822]: 2025-11-01 00:43:41.457 [INFO][4994] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" HandleID="k8s-pod-network.f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" Workload="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--ffnk6-eth0" Nov 1 00:43:41.465317 env[1822]: 2025-11-01 00:43:41.459 [INFO][4994] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:43:41.465317 env[1822]: 2025-11-01 00:43:41.463 [INFO][4987] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" Nov 1 00:43:41.466056 env[1822]: time="2025-11-01T00:43:41.466004424Z" level=info msg="TearDown network for sandbox \"f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92\" successfully" Nov 1 00:43:41.466056 env[1822]: time="2025-11-01T00:43:41.466052349Z" level=info msg="StopPodSandbox for \"f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92\" returns successfully" Nov 1 00:43:41.466967 env[1822]: time="2025-11-01T00:43:41.466930917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68cc86985f-ffnk6,Uid:94ef4085-6c01-4795-a191-98e0030c89bd,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:43:41.485309 systemd[1]: run-containerd-runc-k8s.io-2b5e0e0006d1aea946b295a5fde10ce58ac85e832ce3c3e557df7d6bdd65c311-runc.EpmnXn.mount: Deactivated successfully. Nov 1 00:43:41.485475 systemd[1]: run-netns-cni\x2d8f1437bc\x2de0a7\x2d3d97\x2db5ea\x2d210c4d5d69c5.mount: Deactivated successfully. 
Nov 1 00:43:41.485564 systemd[1]: run-netns-cni\x2dc54ecd44\x2dda9d\x2d31da\x2d0a73\x2dcc18bb58a13a.mount: Deactivated successfully. Nov 1 00:43:41.553028 env[1822]: time="2025-11-01T00:43:41.552979452Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:43:41.554750 env[1822]: time="2025-11-01T00:43:41.554571077Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:43:41.555795 kubelet[2748]: E1101 00:43:41.555154 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:43:41.555795 kubelet[2748]: E1101 00:43:41.555208 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:43:41.555795 kubelet[2748]: E1101 00:43:41.555486 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4kdwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5lqpx_calico-system(9a6bbdac-9f73-4cc6-aadc-84424d8082ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:43:41.557139 kubelet[2748]: E1101 00:43:41.557050 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5lqpx" podUID="9a6bbdac-9f73-4cc6-aadc-84424d8082ea" Nov 1 00:43:41.557582 env[1822]: time="2025-11-01T00:43:41.557547242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:43:41.661423 systemd-networkd[1497]: caliac721c63fb8: Link UP Nov 1 00:43:41.670742 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 00:43:41.670839 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliac721c63fb8: link becomes ready Nov 1 00:43:41.670421 systemd-networkd[1497]: caliac721c63fb8: Gained carrier Nov 1 00:43:41.693519 env[1822]: 2025-11-01 00:43:41.538 [INFO][5000] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--ffnk6-eth0 calico-apiserver-68cc86985f- calico-apiserver 94ef4085-6c01-4795-a191-98e0030c89bd 1012 0 2025-11-01 00:43:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68cc86985f 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-28 calico-apiserver-68cc86985f-ffnk6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliac721c63fb8 [] [] }} ContainerID="483a36ba6652c99dadb458034770ad322d47fc43633f598426a83b3b80ade316" Namespace="calico-apiserver" Pod="calico-apiserver-68cc86985f-ffnk6" WorkloadEndpoint="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--ffnk6-" Nov 1 00:43:41.693519 env[1822]: 2025-11-01 00:43:41.538 [INFO][5000] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="483a36ba6652c99dadb458034770ad322d47fc43633f598426a83b3b80ade316" Namespace="calico-apiserver" Pod="calico-apiserver-68cc86985f-ffnk6" WorkloadEndpoint="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--ffnk6-eth0" Nov 1 00:43:41.693519 env[1822]: 2025-11-01 00:43:41.596 [INFO][5013] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="483a36ba6652c99dadb458034770ad322d47fc43633f598426a83b3b80ade316" HandleID="k8s-pod-network.483a36ba6652c99dadb458034770ad322d47fc43633f598426a83b3b80ade316" Workload="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--ffnk6-eth0" Nov 1 00:43:41.693519 env[1822]: 2025-11-01 00:43:41.597 [INFO][5013] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="483a36ba6652c99dadb458034770ad322d47fc43633f598426a83b3b80ade316" HandleID="k8s-pod-network.483a36ba6652c99dadb458034770ad322d47fc43633f598426a83b3b80ade316" Workload="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--ffnk6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad370), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-28", "pod":"calico-apiserver-68cc86985f-ffnk6", "timestamp":"2025-11-01 00:43:41.596839198 +0000 UTC"}, Hostname:"ip-172-31-19-28", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:43:41.693519 env[1822]: 2025-11-01 00:43:41.597 [INFO][5013] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:43:41.693519 env[1822]: 2025-11-01 00:43:41.597 [INFO][5013] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:43:41.693519 env[1822]: 2025-11-01 00:43:41.597 [INFO][5013] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-28' Nov 1 00:43:41.693519 env[1822]: 2025-11-01 00:43:41.608 [INFO][5013] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.483a36ba6652c99dadb458034770ad322d47fc43633f598426a83b3b80ade316" host="ip-172-31-19-28" Nov 1 00:43:41.693519 env[1822]: 2025-11-01 00:43:41.615 [INFO][5013] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-19-28" Nov 1 00:43:41.693519 env[1822]: 2025-11-01 00:43:41.620 [INFO][5013] ipam/ipam.go 511: Trying affinity for 192.168.50.64/26 host="ip-172-31-19-28" Nov 1 00:43:41.693519 env[1822]: 2025-11-01 00:43:41.622 [INFO][5013] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.64/26 host="ip-172-31-19-28" Nov 1 00:43:41.693519 env[1822]: 2025-11-01 00:43:41.625 [INFO][5013] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.64/26 host="ip-172-31-19-28" Nov 1 00:43:41.693519 env[1822]: 2025-11-01 00:43:41.625 [INFO][5013] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.50.64/26 handle="k8s-pod-network.483a36ba6652c99dadb458034770ad322d47fc43633f598426a83b3b80ade316" host="ip-172-31-19-28" Nov 1 00:43:41.693519 env[1822]: 2025-11-01 00:43:41.633 [INFO][5013] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.483a36ba6652c99dadb458034770ad322d47fc43633f598426a83b3b80ade316 Nov 1 00:43:41.693519 env[1822]: 2025-11-01 00:43:41.639 [INFO][5013] ipam/ipam.go 1246: Writing block 
in order to claim IPs block=192.168.50.64/26 handle="k8s-pod-network.483a36ba6652c99dadb458034770ad322d47fc43633f598426a83b3b80ade316" host="ip-172-31-19-28" Nov 1 00:43:41.693519 env[1822]: 2025-11-01 00:43:41.650 [INFO][5013] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.50.72/26] block=192.168.50.64/26 handle="k8s-pod-network.483a36ba6652c99dadb458034770ad322d47fc43633f598426a83b3b80ade316" host="ip-172-31-19-28" Nov 1 00:43:41.693519 env[1822]: 2025-11-01 00:43:41.651 [INFO][5013] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.72/26] handle="k8s-pod-network.483a36ba6652c99dadb458034770ad322d47fc43633f598426a83b3b80ade316" host="ip-172-31-19-28" Nov 1 00:43:41.693519 env[1822]: 2025-11-01 00:43:41.651 [INFO][5013] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:43:41.693519 env[1822]: 2025-11-01 00:43:41.651 [INFO][5013] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.50.72/26] IPv6=[] ContainerID="483a36ba6652c99dadb458034770ad322d47fc43633f598426a83b3b80ade316" HandleID="k8s-pod-network.483a36ba6652c99dadb458034770ad322d47fc43633f598426a83b3b80ade316" Workload="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--ffnk6-eth0" Nov 1 00:43:41.694163 env[1822]: 2025-11-01 00:43:41.655 [INFO][5000] cni-plugin/k8s.go 418: Populated endpoint ContainerID="483a36ba6652c99dadb458034770ad322d47fc43633f598426a83b3b80ade316" Namespace="calico-apiserver" Pod="calico-apiserver-68cc86985f-ffnk6" WorkloadEndpoint="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--ffnk6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--ffnk6-eth0", GenerateName:"calico-apiserver-68cc86985f-", Namespace:"calico-apiserver", SelfLink:"", UID:"94ef4085-6c01-4795-a191-98e0030c89bd", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 
10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68cc86985f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-28", ContainerID:"", Pod:"calico-apiserver-68cc86985f-ffnk6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac721c63fb8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:43:41.694163 env[1822]: 2025-11-01 00:43:41.655 [INFO][5000] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.72/32] ContainerID="483a36ba6652c99dadb458034770ad322d47fc43633f598426a83b3b80ade316" Namespace="calico-apiserver" Pod="calico-apiserver-68cc86985f-ffnk6" WorkloadEndpoint="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--ffnk6-eth0" Nov 1 00:43:41.694163 env[1822]: 2025-11-01 00:43:41.655 [INFO][5000] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac721c63fb8 ContainerID="483a36ba6652c99dadb458034770ad322d47fc43633f598426a83b3b80ade316" Namespace="calico-apiserver" Pod="calico-apiserver-68cc86985f-ffnk6" WorkloadEndpoint="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--ffnk6-eth0" Nov 1 00:43:41.694163 env[1822]: 2025-11-01 00:43:41.672 [INFO][5000] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="483a36ba6652c99dadb458034770ad322d47fc43633f598426a83b3b80ade316" Namespace="calico-apiserver" Pod="calico-apiserver-68cc86985f-ffnk6" WorkloadEndpoint="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--ffnk6-eth0" Nov 1 00:43:41.694163 env[1822]: 2025-11-01 00:43:41.672 [INFO][5000] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="483a36ba6652c99dadb458034770ad322d47fc43633f598426a83b3b80ade316" Namespace="calico-apiserver" Pod="calico-apiserver-68cc86985f-ffnk6" WorkloadEndpoint="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--ffnk6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--ffnk6-eth0", GenerateName:"calico-apiserver-68cc86985f-", Namespace:"calico-apiserver", SelfLink:"", UID:"94ef4085-6c01-4795-a191-98e0030c89bd", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68cc86985f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-28", ContainerID:"483a36ba6652c99dadb458034770ad322d47fc43633f598426a83b3b80ade316", Pod:"calico-apiserver-68cc86985f-ffnk6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac721c63fb8", MAC:"56:eb:1a:82:06:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:43:41.694163 env[1822]: 2025-11-01 00:43:41.687 [INFO][5000] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="483a36ba6652c99dadb458034770ad322d47fc43633f598426a83b3b80ade316" Namespace="calico-apiserver" Pod="calico-apiserver-68cc86985f-ffnk6" WorkloadEndpoint="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--ffnk6-eth0" Nov 1 00:43:41.711426 env[1822]: time="2025-11-01T00:43:41.711308479Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:43:41.711898 env[1822]: time="2025-11-01T00:43:41.711850294Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:43:41.712032 env[1822]: time="2025-11-01T00:43:41.712011514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:43:41.712428 env[1822]: time="2025-11-01T00:43:41.712380992Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/483a36ba6652c99dadb458034770ad322d47fc43633f598426a83b3b80ade316 pid=5033 runtime=io.containerd.runc.v2 Nov 1 00:43:41.735000 audit[5046]: NETFILTER_CFG table=filter:120 family=2 entries=57 op=nft_register_chain pid=5046 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 00:43:41.735000 audit[5046]: SYSCALL arch=c000003e syscall=46 success=yes exit=27812 a0=3 a1=7ffdc30fabd0 a2=0 a3=7ffdc30fabbc items=0 ppid=4079 pid=5046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:41.735000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 00:43:41.795291 env[1822]: time="2025-11-01T00:43:41.795248622Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:43:41.797186 env[1822]: time="2025-11-01T00:43:41.797127605Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:43:41.798243 kubelet[2748]: E1101 00:43:41.797622 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:43:41.798243 kubelet[2748]: E1101 00:43:41.797683 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:43:41.798243 kubelet[2748]: E1101 00:43:41.797868 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjd2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&
ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6db4456f5f-n6pzz_calico-system(472f8f92-5499-46b7-8902-95424bad4337): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:43:41.799404 kubelet[2748]: E1101 00:43:41.799303 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: 
not found\"" pod="calico-system/calico-kube-controllers-6db4456f5f-n6pzz" podUID="472f8f92-5499-46b7-8902-95424bad4337" Nov 1 00:43:41.819808 kubelet[2748]: E1101 00:43:41.819656 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dn6kn" podUID="6cd31c79-d021-4671-b1b1-16d458644a79" Nov 1 00:43:41.823722 kubelet[2748]: E1101 00:43:41.822833 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6db4456f5f-n6pzz" podUID="472f8f92-5499-46b7-8902-95424bad4337" Nov 1 00:43:41.829451 kubelet[2748]: E1101 00:43:41.829411 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-68cc86985f-qb2p9" podUID="6d3c3149-9beb-44f8-a7ee-d6982872dcbb" Nov 1 00:43:41.830394 kubelet[2748]: E1101 00:43:41.830331 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5lqpx" podUID="9a6bbdac-9f73-4cc6-aadc-84424d8082ea" Nov 1 00:43:41.832741 env[1822]: time="2025-11-01T00:43:41.832696725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68cc86985f-ffnk6,Uid:94ef4085-6c01-4795-a191-98e0030c89bd,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"483a36ba6652c99dadb458034770ad322d47fc43633f598426a83b3b80ade316\"" Nov 1 00:43:41.834887 env[1822]: time="2025-11-01T00:43:41.834837744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:43:41.876000 audit[5070]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=5070 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:41.876000 audit[5070]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdc1eec6f0 a2=0 a3=7ffdc1eec6dc items=0 ppid=2891 pid=5070 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:41.876000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:41.882083 kubelet[2748]: I1101 00:43:41.882009 2748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hj6zc" podStartSLOduration=44.881969872 podStartE2EDuration="44.881969872s" podCreationTimestamp="2025-11-01 00:42:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:43:41.879877568 +0000 UTC m=+48.925650252" watchObservedRunningTime="2025-11-01 00:43:41.881969872 +0000 UTC m=+48.927742554" Nov 1 00:43:41.895000 audit[5070]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=5070 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:41.895000 audit[5070]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdc1eec6f0 a2=0 a3=0 items=0 ppid=2891 pid=5070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:41.895000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:41.917757 systemd-networkd[1497]: cali98e527ed158: Gained IPv6LL Nov 1 00:43:41.918202 systemd-networkd[1497]: calie2a264c2bbd: Gained IPv6LL Nov 1 00:43:41.946000 audit[5072]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=5072 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:41.946000 audit[5072]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 
a1=7fff42f3f200 a2=0 a3=7fff42f3f1ec items=0 ppid=2891 pid=5072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:41.946000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:41.950000 audit[5072]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=5072 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:41.950000 audit[5072]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff42f3f200 a2=0 a3=0 items=0 ppid=2891 pid=5072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:41.950000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:42.040516 systemd-networkd[1497]: calib7f7db156e6: Gained IPv6LL Nov 1 00:43:42.063332 env[1822]: time="2025-11-01T00:43:42.063266114Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:43:42.064684 env[1822]: time="2025-11-01T00:43:42.064519671Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:43:42.065016 kubelet[2748]: E1101 00:43:42.064975 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:43:42.065189 kubelet[2748]: E1101 00:43:42.065141 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:43:42.065417 kubelet[2748]: E1101 00:43:42.065328 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-st25d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68cc86985f-ffnk6_calico-apiserver(94ef4085-6c01-4795-a191-98e0030c89bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:43:42.066979 kubelet[2748]: E1101 00:43:42.066942 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68cc86985f-ffnk6" podUID="94ef4085-6c01-4795-a191-98e0030c89bd" Nov 1 00:43:42.360546 systemd-networkd[1497]: cali1c5b8baa747: Gained IPv6LL Nov 1 00:43:42.360904 systemd-networkd[1497]: cali6c5f1534742: Gained IPv6LL Nov 1 00:43:42.831701 kubelet[2748]: E1101 00:43:42.831641 
2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68cc86985f-ffnk6" podUID="94ef4085-6c01-4795-a191-98e0030c89bd" Nov 1 00:43:42.834883 kubelet[2748]: E1101 00:43:42.834837 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6db4456f5f-n6pzz" podUID="472f8f92-5499-46b7-8902-95424bad4337" Nov 1 00:43:42.836049 kubelet[2748]: E1101 00:43:42.836008 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dn6kn" podUID="6cd31c79-d021-4671-b1b1-16d458644a79" Nov 1 00:43:42.959000 audit[5080]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=5080 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:42.959000 audit[5080]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe774bb660 a2=0 a3=7ffe774bb64c items=0 ppid=2891 pid=5080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:42.959000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:42.965000 audit[5080]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=5080 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:42.965000 audit[5080]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe774bb660 a2=0 a3=0 items=0 ppid=2891 pid=5080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:42.965000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:43.512617 systemd-networkd[1497]: caliac721c63fb8: Gained IPv6LL Nov 1 00:43:43.843877 kubelet[2748]: E1101 00:43:43.843769 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68cc86985f-ffnk6" podUID="94ef4085-6c01-4795-a191-98e0030c89bd" Nov 1 00:43:44.405000 audit[1]: 
SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.19.28:22-147.75.109.163:50986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:44.406377 systemd[1]: Started sshd@8-172.31.19.28:22-147.75.109.163:50986.service. Nov 1 00:43:44.618000 audit[5082]: USER_ACCT pid=5082 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:44.619000 audit[5082]: CRED_ACQ pid=5082 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:44.619000 audit[5082]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff6dcfb7b0 a2=3 a3=0 items=0 ppid=1 pid=5082 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:44.619000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:43:44.622878 sshd[5082]: Accepted publickey for core from 147.75.109.163 port 50986 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:43:44.623064 sshd[5082]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:44.642015 systemd-logind[1803]: New session 9 of user core. Nov 1 00:43:44.643212 systemd[1]: Started session-9.scope. 
Nov 1 00:43:44.649000 audit[5082]: USER_START pid=5082 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:44.651000 audit[5085]: CRED_ACQ pid=5085 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:45.159265 sshd[5082]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:45.159000 audit[5082]: USER_END pid=5082 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:45.159000 audit[5082]: CRED_DISP pid=5082 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:45.162927 systemd[1]: sshd@8-172.31.19.28:22-147.75.109.163:50986.service: Deactivated successfully. Nov 1 00:43:45.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.19.28:22-147.75.109.163:50986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:45.164361 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 00:43:45.165128 systemd-logind[1803]: Session 9 logged out. Waiting for processes to exit. Nov 1 00:43:45.167100 systemd-logind[1803]: Removed session 9. 
Nov 1 00:43:46.419774 kubelet[2748]: I1101 00:43:46.419662 2748 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:43:46.475036 systemd[1]: run-containerd-runc-k8s.io-8af868a050fed176d64ca5615531f2663c1dc93bce5246a2d030017002f6145e-runc.yrKaSU.mount: Deactivated successfully. Nov 1 00:43:46.671445 systemd[1]: run-containerd-runc-k8s.io-8af868a050fed176d64ca5615531f2663c1dc93bce5246a2d030017002f6145e-runc.XBiup8.mount: Deactivated successfully. Nov 1 00:43:50.186010 systemd[1]: Started sshd@9-172.31.19.28:22-147.75.109.163:53516.service. Nov 1 00:43:50.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.19.28:22-147.75.109.163:53516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:50.192381 kernel: kauditd_printk_skb: 37 callbacks suppressed Nov 1 00:43:50.192521 kernel: audit: type=1130 audit(1761957830.185:443): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.19.28:22-147.75.109.163:53516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:43:50.364000 audit[5146]: USER_ACCT pid=5146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:50.370366 kernel: audit: type=1101 audit(1761957830.364:444): pid=5146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:50.370410 sshd[5146]: Accepted publickey for core from 147.75.109.163 port 53516 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:43:50.368000 audit[5146]: CRED_ACQ pid=5146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:50.371277 sshd[5146]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:50.379469 kernel: audit: type=1103 audit(1761957830.368:445): pid=5146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:50.379592 kernel: audit: type=1006 audit(1761957830.369:446): pid=5146 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Nov 1 00:43:50.369000 audit[5146]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc98530260 a2=3 a3=0 items=0 ppid=1 pid=5146 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:50.384605 kernel: audit: type=1300 audit(1761957830.369:446): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc98530260 a2=3 a3=0 items=0 ppid=1 pid=5146 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:50.369000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:43:50.387516 kernel: audit: type=1327 audit(1761957830.369:446): proctitle=737368643A20636F7265205B707269765D Nov 1 00:43:50.389922 systemd-logind[1803]: New session 10 of user core. Nov 1 00:43:50.390464 systemd[1]: Started session-10.scope. Nov 1 00:43:50.394000 audit[5146]: USER_START pid=5146 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:50.396000 audit[5149]: CRED_ACQ pid=5149 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:50.407561 kernel: audit: type=1105 audit(1761957830.394:447): pid=5146 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:50.407655 kernel: audit: type=1103 audit(1761957830.396:448): pid=5149 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh 
res=success' Nov 1 00:43:50.641297 sshd[5146]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:50.646000 audit[5146]: USER_END pid=5146 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:50.655256 systemd[1]: sshd@9-172.31.19.28:22-147.75.109.163:53516.service: Deactivated successfully. Nov 1 00:43:50.656437 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 00:43:50.666685 systemd[1]: Started sshd@10-172.31.19.28:22-147.75.109.163:53520.service. Nov 1 00:43:50.668633 kernel: audit: type=1106 audit(1761957830.646:449): pid=5146 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:50.671903 systemd-logind[1803]: Session 10 logged out. Waiting for processes to exit. Nov 1 00:43:50.682839 systemd-logind[1803]: Removed session 10. Nov 1 00:43:50.646000 audit[5146]: CRED_DISP pid=5146 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:50.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.19.28:22-147.75.109.163:53516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:43:50.694468 kernel: audit: type=1104 audit(1761957830.646:450): pid=5146 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:50.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.19.28:22-147.75.109.163:53520 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:50.834000 audit[5158]: USER_ACCT pid=5158 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:50.835000 audit[5158]: CRED_ACQ pid=5158 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:50.835000 audit[5158]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeeb3e8e80 a2=3 a3=0 items=0 ppid=1 pid=5158 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:50.835000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:43:50.850000 audit[5158]: USER_START pid=5158 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:50.852000 audit[5162]: CRED_ACQ pid=5162 uid=0 auid=500 ses=11 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:50.843608 systemd[1]: Started session-11.scope. Nov 1 00:43:50.837528 sshd[5158]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:50.873420 sshd[5158]: Accepted publickey for core from 147.75.109.163 port 53520 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:43:50.843929 systemd-logind[1803]: New session 11 of user core. Nov 1 00:43:51.139737 sshd[5158]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:51.142000 audit[5158]: USER_END pid=5158 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:51.143000 audit[5158]: CRED_DISP pid=5158 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:51.146971 systemd-logind[1803]: Session 11 logged out. Waiting for processes to exit. Nov 1 00:43:51.147266 systemd[1]: sshd@10-172.31.19.28:22-147.75.109.163:53520.service: Deactivated successfully. Nov 1 00:43:51.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.19.28:22-147.75.109.163:53520 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:51.148455 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 00:43:51.150394 systemd-logind[1803]: Removed session 11. 
Nov 1 00:43:51.163967 systemd[1]: Started sshd@11-172.31.19.28:22-147.75.109.163:53522.service. Nov 1 00:43:51.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.19.28:22-147.75.109.163:53522 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:51.336000 audit[5170]: USER_ACCT pid=5170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:51.339436 sshd[5170]: Accepted publickey for core from 147.75.109.163 port 53522 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:43:51.338000 audit[5170]: CRED_ACQ pid=5170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:51.338000 audit[5170]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeb8490c00 a2=3 a3=0 items=0 ppid=1 pid=5170 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:51.338000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:43:51.340245 sshd[5170]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:51.345409 systemd-logind[1803]: New session 12 of user core. Nov 1 00:43:51.346482 systemd[1]: Started session-12.scope. 
Nov 1 00:43:51.353000 audit[5170]: USER_START pid=5170 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:51.357000 audit[5173]: CRED_ACQ pid=5173 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:51.362277 env[1822]: time="2025-11-01T00:43:51.361946152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:43:51.598434 env[1822]: time="2025-11-01T00:43:51.598374300Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:43:51.599799 env[1822]: time="2025-11-01T00:43:51.599735474Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:43:51.600065 kubelet[2748]: E1101 00:43:51.600015 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:43:51.600540 kubelet[2748]: E1101 00:43:51.600069 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:43:51.600540 kubelet[2748]: E1101 00:43:51.600222 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b7379b16cfad4b00a1e9214c9508a19a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8vkgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78c5cf549b-vmbkl_calico-system(3c464f2f-5c33-4de3-9c8e-29f197089e35): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:43:51.603169 env[1822]: time="2025-11-01T00:43:51.603122541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:43:51.613112 sshd[5170]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:51.614000 audit[5170]: USER_END pid=5170 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:51.614000 audit[5170]: CRED_DISP pid=5170 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:51.618886 systemd[1]: sshd@11-172.31.19.28:22-147.75.109.163:53522.service: Deactivated successfully. Nov 1 00:43:51.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.19.28:22-147.75.109.163:53522 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:51.619799 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 00:43:51.620221 systemd-logind[1803]: Session 12 logged out. Waiting for processes to exit. Nov 1 00:43:51.621099 systemd-logind[1803]: Removed session 12. 
Nov 1 00:43:51.850809 env[1822]: time="2025-11-01T00:43:51.850666953Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:43:51.852027 env[1822]: time="2025-11-01T00:43:51.851972114Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:43:51.852479 kubelet[2748]: E1101 00:43:51.852409 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:43:51.852617 kubelet[2748]: E1101 00:43:51.852515 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:43:51.852658 kubelet[2748]: E1101 00:43:51.852629 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8vkgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78c5cf549b-vmbkl_calico-system(3c464f2f-5c33-4de3-9c8e-29f197089e35): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:43:51.854242 kubelet[2748]: E1101 00:43:51.854170 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78c5cf549b-vmbkl" podUID="3c464f2f-5c33-4de3-9c8e-29f197089e35" Nov 1 00:43:51.876000 audit[5185]: NETFILTER_CFG table=filter:127 family=2 entries=17 op=nft_register_rule pid=5185 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:51.876000 audit[5185]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe8fc0c950 a2=0 a3=7ffe8fc0c93c items=0 ppid=2891 pid=5185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:51.876000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:51.881000 audit[5185]: NETFILTER_CFG table=nat:128 family=2 entries=35 op=nft_register_chain pid=5185 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:51.881000 audit[5185]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=14196 a0=3 a1=7ffe8fc0c950 a2=0 a3=7ffe8fc0c93c items=0 ppid=2891 pid=5185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:51.881000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:52.350763 env[1822]: time="2025-11-01T00:43:52.350467132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:43:52.593060 env[1822]: time="2025-11-01T00:43:52.593004888Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:43:52.594763 env[1822]: time="2025-11-01T00:43:52.594688288Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:43:52.595092 kubelet[2748]: E1101 00:43:52.595052 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:43:52.595189 kubelet[2748]: E1101 00:43:52.595118 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:43:52.595361 kubelet[2748]: 
E1101 00:43:52.595279 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ghsd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68cc86985f-qb2p9_calico-apiserver(6d3c3149-9beb-44f8-a7ee-d6982872dcbb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:43:52.596994 kubelet[2748]: E1101 00:43:52.596962 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68cc86985f-qb2p9" podUID="6d3c3149-9beb-44f8-a7ee-d6982872dcbb" Nov 1 00:43:52.878000 audit[5187]: NETFILTER_CFG table=filter:129 family=2 entries=14 op=nft_register_rule pid=5187 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:52.878000 audit[5187]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdeb974310 a2=0 a3=7ffdeb9742fc items=0 ppid=2891 pid=5187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:52.878000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:52.894000 audit[5187]: NETFILTER_CFG table=nat:130 family=2 entries=56 op=nft_register_chain pid=5187 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:43:52.894000 audit[5187]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffdeb974310 a2=0 a3=7ffdeb9742fc items=0 ppid=2891 pid=5187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:52.894000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:43:53.336643 env[1822]: time="2025-11-01T00:43:53.336599397Z" level=info msg="StopPodSandbox for \"6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0\"" Nov 1 00:43:53.353631 env[1822]: time="2025-11-01T00:43:53.353149870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:43:53.634216 env[1822]: time="2025-11-01T00:43:53.634022346Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:43:53.636804 env[1822]: time="2025-11-01T00:43:53.636723591Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:43:53.637298 kubelet[2748]: E1101 00:43:53.637215 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:43:53.639700 kubelet[2748]: E1101 00:43:53.637329 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:43:53.639700 kubelet[2748]: E1101 00:43:53.637651 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,
MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wplvg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dn6kn_calico-system(6cd31c79-d021-4671-b1b1-16d458644a79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:43:53.641202 env[1822]: 
time="2025-11-01T00:43:53.638915568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:43:53.641328 kubelet[2748]: E1101 00:43:53.639697 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dn6kn" podUID="6cd31c79-d021-4671-b1b1-16d458644a79" Nov 1 00:43:53.644831 env[1822]: 2025-11-01 00:43:53.596 [WARNING][5200] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--28-k8s-goldmane--666569f655--dn6kn-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"6cd31c79-d021-4671-b1b1-16d458644a79", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-28", ContainerID:"2b5e0e0006d1aea946b295a5fde10ce58ac85e832ce3c3e557df7d6bdd65c311", Pod:"goldmane-666569f655-dn6kn", 
Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.50.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1c5b8baa747", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:43:53.644831 env[1822]: 2025-11-01 00:43:53.596 [INFO][5200] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" Nov 1 00:43:53.644831 env[1822]: 2025-11-01 00:43:53.596 [INFO][5200] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" iface="eth0" netns="" Nov 1 00:43:53.644831 env[1822]: 2025-11-01 00:43:53.596 [INFO][5200] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" Nov 1 00:43:53.644831 env[1822]: 2025-11-01 00:43:53.596 [INFO][5200] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" Nov 1 00:43:53.644831 env[1822]: 2025-11-01 00:43:53.623 [INFO][5207] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" HandleID="k8s-pod-network.6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" Workload="ip--172--31--19--28-k8s-goldmane--666569f655--dn6kn-eth0" Nov 1 00:43:53.644831 env[1822]: 2025-11-01 00:43:53.624 [INFO][5207] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:43:53.644831 env[1822]: 2025-11-01 00:43:53.624 [INFO][5207] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:43:53.644831 env[1822]: 2025-11-01 00:43:53.630 [WARNING][5207] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" HandleID="k8s-pod-network.6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" Workload="ip--172--31--19--28-k8s-goldmane--666569f655--dn6kn-eth0" Nov 1 00:43:53.644831 env[1822]: 2025-11-01 00:43:53.630 [INFO][5207] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" HandleID="k8s-pod-network.6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" Workload="ip--172--31--19--28-k8s-goldmane--666569f655--dn6kn-eth0" Nov 1 00:43:53.644831 env[1822]: 2025-11-01 00:43:53.633 [INFO][5207] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:43:53.644831 env[1822]: 2025-11-01 00:43:53.636 [INFO][5200] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" Nov 1 00:43:53.646514 env[1822]: time="2025-11-01T00:43:53.644862846Z" level=info msg="TearDown network for sandbox \"6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0\" successfully" Nov 1 00:43:53.646514 env[1822]: time="2025-11-01T00:43:53.644898881Z" level=info msg="StopPodSandbox for \"6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0\" returns successfully" Nov 1 00:43:53.650997 env[1822]: time="2025-11-01T00:43:53.650937855Z" level=info msg="RemovePodSandbox for \"6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0\"" Nov 1 00:43:53.651166 env[1822]: time="2025-11-01T00:43:53.651001145Z" level=info msg="Forcibly stopping sandbox \"6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0\"" Nov 1 00:43:53.738900 env[1822]: 2025-11-01 00:43:53.695 [WARNING][5222] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--28-k8s-goldmane--666569f655--dn6kn-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"6cd31c79-d021-4671-b1b1-16d458644a79", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-28", ContainerID:"2b5e0e0006d1aea946b295a5fde10ce58ac85e832ce3c3e557df7d6bdd65c311", Pod:"goldmane-666569f655-dn6kn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.50.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1c5b8baa747", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:43:53.738900 env[1822]: 2025-11-01 00:43:53.696 [INFO][5222] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" Nov 1 00:43:53.738900 env[1822]: 2025-11-01 00:43:53.696 [INFO][5222] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" iface="eth0" netns="" Nov 1 00:43:53.738900 env[1822]: 2025-11-01 00:43:53.696 [INFO][5222] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" Nov 1 00:43:53.738900 env[1822]: 2025-11-01 00:43:53.696 [INFO][5222] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" Nov 1 00:43:53.738900 env[1822]: 2025-11-01 00:43:53.725 [INFO][5229] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" HandleID="k8s-pod-network.6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" Workload="ip--172--31--19--28-k8s-goldmane--666569f655--dn6kn-eth0" Nov 1 00:43:53.738900 env[1822]: 2025-11-01 00:43:53.725 [INFO][5229] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:43:53.738900 env[1822]: 2025-11-01 00:43:53.725 [INFO][5229] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:43:53.738900 env[1822]: 2025-11-01 00:43:53.731 [WARNING][5229] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" HandleID="k8s-pod-network.6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" Workload="ip--172--31--19--28-k8s-goldmane--666569f655--dn6kn-eth0" Nov 1 00:43:53.738900 env[1822]: 2025-11-01 00:43:53.731 [INFO][5229] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" HandleID="k8s-pod-network.6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" Workload="ip--172--31--19--28-k8s-goldmane--666569f655--dn6kn-eth0" Nov 1 00:43:53.738900 env[1822]: 2025-11-01 00:43:53.734 [INFO][5229] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:43:53.738900 env[1822]: 2025-11-01 00:43:53.736 [INFO][5222] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0" Nov 1 00:43:53.739665 env[1822]: time="2025-11-01T00:43:53.738938522Z" level=info msg="TearDown network for sandbox \"6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0\" successfully" Nov 1 00:43:53.744967 env[1822]: time="2025-11-01T00:43:53.744912376Z" level=info msg="RemovePodSandbox \"6a94341bda6d58c2e87fb1f1bc9633230beb1f427ac3841f553b6e893cdd57f0\" returns successfully" Nov 1 00:43:53.746083 env[1822]: time="2025-11-01T00:43:53.746045630Z" level=info msg="StopPodSandbox for \"cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44\"" Nov 1 00:43:53.835811 env[1822]: 2025-11-01 00:43:53.793 [WARNING][5243] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" WorkloadEndpoint="ip--172--31--19--28-k8s-whisker--7f96c4696b--wc2sq-eth0" Nov 1 00:43:53.835811 env[1822]: 2025-11-01 00:43:53.793 [INFO][5243] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" Nov 1 00:43:53.835811 env[1822]: 2025-11-01 00:43:53.793 [INFO][5243] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" iface="eth0" netns="" Nov 1 00:43:53.835811 env[1822]: 2025-11-01 00:43:53.793 [INFO][5243] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" Nov 1 00:43:53.835811 env[1822]: 2025-11-01 00:43:53.793 [INFO][5243] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" Nov 1 00:43:53.835811 env[1822]: 2025-11-01 00:43:53.819 [INFO][5250] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" HandleID="k8s-pod-network.cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" Workload="ip--172--31--19--28-k8s-whisker--7f96c4696b--wc2sq-eth0" Nov 1 00:43:53.835811 env[1822]: 2025-11-01 00:43:53.820 [INFO][5250] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:43:53.835811 env[1822]: 2025-11-01 00:43:53.820 [INFO][5250] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:43:53.835811 env[1822]: 2025-11-01 00:43:53.829 [WARNING][5250] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" HandleID="k8s-pod-network.cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" Workload="ip--172--31--19--28-k8s-whisker--7f96c4696b--wc2sq-eth0" Nov 1 00:43:53.835811 env[1822]: 2025-11-01 00:43:53.829 [INFO][5250] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" HandleID="k8s-pod-network.cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" Workload="ip--172--31--19--28-k8s-whisker--7f96c4696b--wc2sq-eth0" Nov 1 00:43:53.835811 env[1822]: 2025-11-01 00:43:53.831 [INFO][5250] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:43:53.835811 env[1822]: 2025-11-01 00:43:53.833 [INFO][5243] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" Nov 1 00:43:53.836427 env[1822]: time="2025-11-01T00:43:53.835844107Z" level=info msg="TearDown network for sandbox \"cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44\" successfully" Nov 1 00:43:53.836427 env[1822]: time="2025-11-01T00:43:53.835880661Z" level=info msg="StopPodSandbox for \"cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44\" returns successfully" Nov 1 00:43:53.836526 env[1822]: time="2025-11-01T00:43:53.836435548Z" level=info msg="RemovePodSandbox for \"cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44\"" Nov 1 00:43:53.836526 env[1822]: time="2025-11-01T00:43:53.836475995Z" level=info msg="Forcibly stopping sandbox \"cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44\"" Nov 1 00:43:53.893193 env[1822]: time="2025-11-01T00:43:53.890425579Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:43:53.895873 env[1822]: time="2025-11-01T00:43:53.895768484Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:43:53.896367 kubelet[2748]: E1101 00:43:53.896284 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:43:53.898101 kubelet[2748]: E1101 00:43:53.896470 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:43:53.898101 kubelet[2748]: E1101 00:43:53.896637 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4kdwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5lqpx_calico-system(9a6bbdac-9f73-4cc6-aadc-84424d8082ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:43:53.905977 env[1822]: time="2025-11-01T00:43:53.905282913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:43:53.955616 env[1822]: 2025-11-01 00:43:53.914 [WARNING][5265] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" WorkloadEndpoint="ip--172--31--19--28-k8s-whisker--7f96c4696b--wc2sq-eth0" Nov 1 00:43:53.955616 env[1822]: 2025-11-01 00:43:53.914 [INFO][5265] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" Nov 1 00:43:53.955616 env[1822]: 2025-11-01 00:43:53.914 [INFO][5265] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" iface="eth0" netns="" Nov 1 00:43:53.955616 env[1822]: 2025-11-01 00:43:53.914 [INFO][5265] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" Nov 1 00:43:53.955616 env[1822]: 2025-11-01 00:43:53.914 [INFO][5265] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" Nov 1 00:43:53.955616 env[1822]: 2025-11-01 00:43:53.942 [INFO][5272] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" HandleID="k8s-pod-network.cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" Workload="ip--172--31--19--28-k8s-whisker--7f96c4696b--wc2sq-eth0" Nov 1 00:43:53.955616 env[1822]: 2025-11-01 00:43:53.943 [INFO][5272] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 00:43:53.955616 env[1822]: 2025-11-01 00:43:53.943 [INFO][5272] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:43:53.955616 env[1822]: 2025-11-01 00:43:53.949 [WARNING][5272] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" HandleID="k8s-pod-network.cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" Workload="ip--172--31--19--28-k8s-whisker--7f96c4696b--wc2sq-eth0" Nov 1 00:43:53.955616 env[1822]: 2025-11-01 00:43:53.949 [INFO][5272] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" HandleID="k8s-pod-network.cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" Workload="ip--172--31--19--28-k8s-whisker--7f96c4696b--wc2sq-eth0" Nov 1 00:43:53.955616 env[1822]: 2025-11-01 00:43:53.951 [INFO][5272] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:43:53.955616 env[1822]: 2025-11-01 00:43:53.953 [INFO][5265] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44" Nov 1 00:43:53.956264 env[1822]: time="2025-11-01T00:43:53.955652626Z" level=info msg="TearDown network for sandbox \"cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44\" successfully" Nov 1 00:43:53.961549 env[1822]: time="2025-11-01T00:43:53.961504503Z" level=info msg="RemovePodSandbox \"cc8cfebc43bb54b83eb767ecdefa546ef8aaa624bfbe2bc93d44b220993b1a44\" returns successfully" Nov 1 00:43:53.962276 env[1822]: time="2025-11-01T00:43:53.962221808Z" level=info msg="StopPodSandbox for \"3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20\"" Nov 1 00:43:54.063047 env[1822]: 2025-11-01 00:43:54.016 [WARNING][5287] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--28-k8s-csi--node--driver--5lqpx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9a6bbdac-9f73-4cc6-aadc-84424d8082ea", ResourceVersion:"1159", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-28", ContainerID:"120943dea4916e52e9e84ede1040664a7404f57b37e46766e065d8ea86a50f77", Pod:"csi-node-driver-5lqpx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie2a264c2bbd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:43:54.063047 env[1822]: 2025-11-01 00:43:54.017 [INFO][5287] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" Nov 1 00:43:54.063047 env[1822]: 2025-11-01 00:43:54.017 [INFO][5287] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" iface="eth0" netns="" Nov 1 00:43:54.063047 env[1822]: 2025-11-01 00:43:54.017 [INFO][5287] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" Nov 1 00:43:54.063047 env[1822]: 2025-11-01 00:43:54.017 [INFO][5287] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" Nov 1 00:43:54.063047 env[1822]: 2025-11-01 00:43:54.046 [INFO][5295] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" HandleID="k8s-pod-network.3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" Workload="ip--172--31--19--28-k8s-csi--node--driver--5lqpx-eth0" Nov 1 00:43:54.063047 env[1822]: 2025-11-01 00:43:54.046 [INFO][5295] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:43:54.063047 env[1822]: 2025-11-01 00:43:54.046 [INFO][5295] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:43:54.063047 env[1822]: 2025-11-01 00:43:54.053 [WARNING][5295] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" HandleID="k8s-pod-network.3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" Workload="ip--172--31--19--28-k8s-csi--node--driver--5lqpx-eth0" Nov 1 00:43:54.063047 env[1822]: 2025-11-01 00:43:54.053 [INFO][5295] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" HandleID="k8s-pod-network.3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" Workload="ip--172--31--19--28-k8s-csi--node--driver--5lqpx-eth0" Nov 1 00:43:54.063047 env[1822]: 2025-11-01 00:43:54.055 [INFO][5295] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:43:54.063047 env[1822]: 2025-11-01 00:43:54.058 [INFO][5287] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" Nov 1 00:43:54.063863 env[1822]: time="2025-11-01T00:43:54.063077360Z" level=info msg="TearDown network for sandbox \"3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20\" successfully" Nov 1 00:43:54.063863 env[1822]: time="2025-11-01T00:43:54.063116085Z" level=info msg="StopPodSandbox for \"3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20\" returns successfully" Nov 1 00:43:54.063863 env[1822]: time="2025-11-01T00:43:54.063702974Z" level=info msg="RemovePodSandbox for \"3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20\"" Nov 1 00:43:54.063863 env[1822]: time="2025-11-01T00:43:54.063742724Z" level=info msg="Forcibly stopping sandbox \"3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20\"" Nov 1 00:43:54.152414 env[1822]: time="2025-11-01T00:43:54.150158629Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:43:54.155506 env[1822]: time="2025-11-01T00:43:54.155428006Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:43:54.156260 kubelet[2748]: E1101 00:43:54.155949 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:43:54.156260 kubelet[2748]: E1101 00:43:54.156031 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:43:54.156260 kubelet[2748]: E1101 00:43:54.156183 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4kdwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5lqpx_calico-system(9a6bbdac-9f73-4cc6-aadc-84424d8082ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:43:54.158381 kubelet[2748]: E1101 00:43:54.158243 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5lqpx" podUID="9a6bbdac-9f73-4cc6-aadc-84424d8082ea" Nov 1 00:43:54.185021 env[1822]: 2025-11-01 00:43:54.131 [WARNING][5311] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--28-k8s-csi--node--driver--5lqpx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9a6bbdac-9f73-4cc6-aadc-84424d8082ea", ResourceVersion:"1159", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-28", ContainerID:"120943dea4916e52e9e84ede1040664a7404f57b37e46766e065d8ea86a50f77", Pod:"csi-node-driver-5lqpx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie2a264c2bbd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:43:54.185021 env[1822]: 2025-11-01 00:43:54.131 [INFO][5311] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" Nov 1 00:43:54.185021 env[1822]: 2025-11-01 00:43:54.131 [INFO][5311] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" iface="eth0" netns="" Nov 1 00:43:54.185021 env[1822]: 2025-11-01 00:43:54.132 [INFO][5311] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" Nov 1 00:43:54.185021 env[1822]: 2025-11-01 00:43:54.132 [INFO][5311] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" Nov 1 00:43:54.185021 env[1822]: 2025-11-01 00:43:54.165 [INFO][5318] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" HandleID="k8s-pod-network.3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" Workload="ip--172--31--19--28-k8s-csi--node--driver--5lqpx-eth0" Nov 1 00:43:54.185021 env[1822]: 2025-11-01 00:43:54.165 [INFO][5318] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:43:54.185021 env[1822]: 2025-11-01 00:43:54.165 [INFO][5318] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:43:54.185021 env[1822]: 2025-11-01 00:43:54.175 [WARNING][5318] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" HandleID="k8s-pod-network.3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" Workload="ip--172--31--19--28-k8s-csi--node--driver--5lqpx-eth0" Nov 1 00:43:54.185021 env[1822]: 2025-11-01 00:43:54.175 [INFO][5318] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" HandleID="k8s-pod-network.3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" Workload="ip--172--31--19--28-k8s-csi--node--driver--5lqpx-eth0" Nov 1 00:43:54.185021 env[1822]: 2025-11-01 00:43:54.177 [INFO][5318] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:43:54.185021 env[1822]: 2025-11-01 00:43:54.182 [INFO][5311] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20" Nov 1 00:43:54.185733 env[1822]: time="2025-11-01T00:43:54.185683369Z" level=info msg="TearDown network for sandbox \"3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20\" successfully" Nov 1 00:43:54.192042 env[1822]: time="2025-11-01T00:43:54.191982953Z" level=info msg="RemovePodSandbox \"3f9ccbbef2bbf6bba7119dbd2ed93ddf834e78923b105222fd810706683d9e20\" returns successfully" Nov 1 00:43:54.192530 env[1822]: time="2025-11-01T00:43:54.192503730Z" level=info msg="StopPodSandbox for \"f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92\"" Nov 1 00:43:54.288490 env[1822]: 2025-11-01 00:43:54.243 [WARNING][5332] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--ffnk6-eth0", GenerateName:"calico-apiserver-68cc86985f-", Namespace:"calico-apiserver", SelfLink:"", UID:"94ef4085-6c01-4795-a191-98e0030c89bd", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68cc86985f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-28", ContainerID:"483a36ba6652c99dadb458034770ad322d47fc43633f598426a83b3b80ade316", Pod:"calico-apiserver-68cc86985f-ffnk6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac721c63fb8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:43:54.288490 env[1822]: 2025-11-01 00:43:54.244 [INFO][5332] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" Nov 1 00:43:54.288490 env[1822]: 2025-11-01 00:43:54.244 [INFO][5332] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" iface="eth0" netns="" Nov 1 00:43:54.288490 env[1822]: 2025-11-01 00:43:54.244 [INFO][5332] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" Nov 1 00:43:54.288490 env[1822]: 2025-11-01 00:43:54.244 [INFO][5332] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" Nov 1 00:43:54.288490 env[1822]: 2025-11-01 00:43:54.273 [INFO][5339] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" HandleID="k8s-pod-network.f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" Workload="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--ffnk6-eth0" Nov 1 00:43:54.288490 env[1822]: 2025-11-01 00:43:54.273 [INFO][5339] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:43:54.288490 env[1822]: 2025-11-01 00:43:54.273 [INFO][5339] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:43:54.288490 env[1822]: 2025-11-01 00:43:54.281 [WARNING][5339] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" HandleID="k8s-pod-network.f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" Workload="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--ffnk6-eth0" Nov 1 00:43:54.288490 env[1822]: 2025-11-01 00:43:54.281 [INFO][5339] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" HandleID="k8s-pod-network.f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" Workload="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--ffnk6-eth0" Nov 1 00:43:54.288490 env[1822]: 2025-11-01 00:43:54.284 [INFO][5339] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:43:54.288490 env[1822]: 2025-11-01 00:43:54.286 [INFO][5332] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" Nov 1 00:43:54.289276 env[1822]: time="2025-11-01T00:43:54.289223713Z" level=info msg="TearDown network for sandbox \"f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92\" successfully" Nov 1 00:43:54.289401 env[1822]: time="2025-11-01T00:43:54.289380384Z" level=info msg="StopPodSandbox for \"f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92\" returns successfully" Nov 1 00:43:54.290025 env[1822]: time="2025-11-01T00:43:54.289997294Z" level=info msg="RemovePodSandbox for \"f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92\"" Nov 1 00:43:54.290216 env[1822]: time="2025-11-01T00:43:54.290160225Z" level=info msg="Forcibly stopping sandbox \"f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92\"" Nov 1 00:43:54.350825 env[1822]: time="2025-11-01T00:43:54.350756729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:43:54.411124 env[1822]: 2025-11-01 00:43:54.346 [WARNING][5354] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match 
WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--ffnk6-eth0", GenerateName:"calico-apiserver-68cc86985f-", Namespace:"calico-apiserver", SelfLink:"", UID:"94ef4085-6c01-4795-a191-98e0030c89bd", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68cc86985f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-28", ContainerID:"483a36ba6652c99dadb458034770ad322d47fc43633f598426a83b3b80ade316", Pod:"calico-apiserver-68cc86985f-ffnk6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliac721c63fb8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:43:54.411124 env[1822]: 2025-11-01 00:43:54.346 [INFO][5354] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" Nov 1 00:43:54.411124 env[1822]: 2025-11-01 00:43:54.346 [INFO][5354] cni-plugin/dataplane_linux.go 
555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" iface="eth0" netns="" Nov 1 00:43:54.411124 env[1822]: 2025-11-01 00:43:54.347 [INFO][5354] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" Nov 1 00:43:54.411124 env[1822]: 2025-11-01 00:43:54.347 [INFO][5354] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" Nov 1 00:43:54.411124 env[1822]: 2025-11-01 00:43:54.385 [INFO][5362] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" HandleID="k8s-pod-network.f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" Workload="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--ffnk6-eth0" Nov 1 00:43:54.411124 env[1822]: 2025-11-01 00:43:54.385 [INFO][5362] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:43:54.411124 env[1822]: 2025-11-01 00:43:54.385 [INFO][5362] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:43:54.411124 env[1822]: 2025-11-01 00:43:54.396 [WARNING][5362] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" HandleID="k8s-pod-network.f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" Workload="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--ffnk6-eth0" Nov 1 00:43:54.411124 env[1822]: 2025-11-01 00:43:54.396 [INFO][5362] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" HandleID="k8s-pod-network.f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" Workload="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--ffnk6-eth0" Nov 1 00:43:54.411124 env[1822]: 2025-11-01 00:43:54.398 [INFO][5362] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:43:54.411124 env[1822]: 2025-11-01 00:43:54.405 [INFO][5354] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92" Nov 1 00:43:54.411124 env[1822]: time="2025-11-01T00:43:54.408951606Z" level=info msg="TearDown network for sandbox \"f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92\" successfully" Nov 1 00:43:54.416168 env[1822]: time="2025-11-01T00:43:54.416096882Z" level=info msg="RemovePodSandbox \"f980ef96a6c83ae2189763c9d6f8dbdf72b76449aaf8dec4700f65aab951ff92\" returns successfully" Nov 1 00:43:54.417096 env[1822]: time="2025-11-01T00:43:54.417065361Z" level=info msg="StopPodSandbox for \"184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1\"" Nov 1 00:43:54.504925 env[1822]: 2025-11-01 00:43:54.458 [WARNING][5377] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--28-k8s-coredns--668d6bf9bc--k7np4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9d1a8da9-b760-4438-8153-c39262d29176", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 42, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-28", ContainerID:"6fe4274f05d7a9e35197518262c5fbfe6ccfc5c158e2bcc62dd595f2dfdc5d90", Pod:"coredns-668d6bf9bc-k7np4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliebcbe53047e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:43:54.504925 env[1822]: 2025-11-01 00:43:54.458 
[INFO][5377] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" Nov 1 00:43:54.504925 env[1822]: 2025-11-01 00:43:54.458 [INFO][5377] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" iface="eth0" netns="" Nov 1 00:43:54.504925 env[1822]: 2025-11-01 00:43:54.458 [INFO][5377] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" Nov 1 00:43:54.504925 env[1822]: 2025-11-01 00:43:54.458 [INFO][5377] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" Nov 1 00:43:54.504925 env[1822]: 2025-11-01 00:43:54.483 [INFO][5385] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" HandleID="k8s-pod-network.184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" Workload="ip--172--31--19--28-k8s-coredns--668d6bf9bc--k7np4-eth0" Nov 1 00:43:54.504925 env[1822]: 2025-11-01 00:43:54.483 [INFO][5385] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:43:54.504925 env[1822]: 2025-11-01 00:43:54.483 [INFO][5385] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:43:54.504925 env[1822]: 2025-11-01 00:43:54.490 [WARNING][5385] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" HandleID="k8s-pod-network.184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" Workload="ip--172--31--19--28-k8s-coredns--668d6bf9bc--k7np4-eth0" Nov 1 00:43:54.504925 env[1822]: 2025-11-01 00:43:54.490 [INFO][5385] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" HandleID="k8s-pod-network.184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" Workload="ip--172--31--19--28-k8s-coredns--668d6bf9bc--k7np4-eth0" Nov 1 00:43:54.504925 env[1822]: 2025-11-01 00:43:54.496 [INFO][5385] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:43:54.504925 env[1822]: 2025-11-01 00:43:54.502 [INFO][5377] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" Nov 1 00:43:54.505774 env[1822]: time="2025-11-01T00:43:54.504964042Z" level=info msg="TearDown network for sandbox \"184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1\" successfully" Nov 1 00:43:54.505774 env[1822]: time="2025-11-01T00:43:54.505015562Z" level=info msg="StopPodSandbox for \"184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1\" returns successfully" Nov 1 00:43:54.505774 env[1822]: time="2025-11-01T00:43:54.505628766Z" level=info msg="RemovePodSandbox for \"184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1\"" Nov 1 00:43:54.505932 env[1822]: time="2025-11-01T00:43:54.505670649Z" level=info msg="Forcibly stopping sandbox \"184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1\"" Nov 1 00:43:54.591418 env[1822]: time="2025-11-01T00:43:54.591292244Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:43:54.593851 env[1822]: time="2025-11-01T00:43:54.593763302Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" 
failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:43:54.595512 kubelet[2748]: E1101 00:43:54.594324 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:43:54.595512 kubelet[2748]: E1101 00:43:54.594415 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:43:54.595512 kubelet[2748]: E1101 00:43:54.594884 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-st25d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68cc86985f-ffnk6_calico-apiserver(94ef4085-6c01-4795-a191-98e0030c89bd): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:43:54.596207 kubelet[2748]: E1101 00:43:54.596133 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68cc86985f-ffnk6" podUID="94ef4085-6c01-4795-a191-98e0030c89bd" Nov 1 00:43:54.596712 env[1822]: time="2025-11-01T00:43:54.596660313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:43:54.604594 env[1822]: 2025-11-01 00:43:54.546 [WARNING][5400] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--28-k8s-coredns--668d6bf9bc--k7np4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9d1a8da9-b760-4438-8153-c39262d29176", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 42, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-28", ContainerID:"6fe4274f05d7a9e35197518262c5fbfe6ccfc5c158e2bcc62dd595f2dfdc5d90", Pod:"coredns-668d6bf9bc-k7np4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliebcbe53047e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:43:54.604594 env[1822]: 2025-11-01 00:43:54.547 
[INFO][5400] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" Nov 1 00:43:54.604594 env[1822]: 2025-11-01 00:43:54.547 [INFO][5400] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" iface="eth0" netns="" Nov 1 00:43:54.604594 env[1822]: 2025-11-01 00:43:54.547 [INFO][5400] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" Nov 1 00:43:54.604594 env[1822]: 2025-11-01 00:43:54.547 [INFO][5400] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" Nov 1 00:43:54.604594 env[1822]: 2025-11-01 00:43:54.572 [INFO][5407] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" HandleID="k8s-pod-network.184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" Workload="ip--172--31--19--28-k8s-coredns--668d6bf9bc--k7np4-eth0" Nov 1 00:43:54.604594 env[1822]: 2025-11-01 00:43:54.572 [INFO][5407] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:43:54.604594 env[1822]: 2025-11-01 00:43:54.572 [INFO][5407] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:43:54.604594 env[1822]: 2025-11-01 00:43:54.591 [WARNING][5407] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" HandleID="k8s-pod-network.184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" Workload="ip--172--31--19--28-k8s-coredns--668d6bf9bc--k7np4-eth0" Nov 1 00:43:54.604594 env[1822]: 2025-11-01 00:43:54.591 [INFO][5407] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" HandleID="k8s-pod-network.184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" Workload="ip--172--31--19--28-k8s-coredns--668d6bf9bc--k7np4-eth0" Nov 1 00:43:54.604594 env[1822]: 2025-11-01 00:43:54.593 [INFO][5407] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:43:54.604594 env[1822]: 2025-11-01 00:43:54.601 [INFO][5400] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1" Nov 1 00:43:54.606221 env[1822]: time="2025-11-01T00:43:54.604639796Z" level=info msg="TearDown network for sandbox \"184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1\" successfully" Nov 1 00:43:54.613023 env[1822]: time="2025-11-01T00:43:54.612932261Z" level=info msg="RemovePodSandbox \"184adde9a51b41802be1890323d743b8d686ccebc0bfbdd8995d0d3586712ce1\" returns successfully" Nov 1 00:43:54.614939 env[1822]: time="2025-11-01T00:43:54.614899116Z" level=info msg="StopPodSandbox for \"f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c\"" Nov 1 00:43:54.716741 env[1822]: 2025-11-01 00:43:54.661 [WARNING][5422] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--qb2p9-eth0", GenerateName:"calico-apiserver-68cc86985f-", Namespace:"calico-apiserver", SelfLink:"", UID:"6d3c3149-9beb-44f8-a7ee-d6982872dcbb", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68cc86985f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-28", ContainerID:"30429913d59a9c1a438da6224a5de8fb453b655c94dd42cdbddb634d16dae4c2", Pod:"calico-apiserver-68cc86985f-qb2p9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali98e527ed158", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:43:54.716741 env[1822]: 2025-11-01 00:43:54.661 [INFO][5422] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" Nov 1 00:43:54.716741 env[1822]: 2025-11-01 00:43:54.661 [INFO][5422] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" iface="eth0" netns="" Nov 1 00:43:54.716741 env[1822]: 2025-11-01 00:43:54.661 [INFO][5422] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" Nov 1 00:43:54.716741 env[1822]: 2025-11-01 00:43:54.661 [INFO][5422] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" Nov 1 00:43:54.716741 env[1822]: 2025-11-01 00:43:54.700 [INFO][5429] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" HandleID="k8s-pod-network.f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" Workload="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--qb2p9-eth0" Nov 1 00:43:54.716741 env[1822]: 2025-11-01 00:43:54.700 [INFO][5429] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:43:54.716741 env[1822]: 2025-11-01 00:43:54.700 [INFO][5429] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:43:54.716741 env[1822]: 2025-11-01 00:43:54.707 [WARNING][5429] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" HandleID="k8s-pod-network.f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" Workload="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--qb2p9-eth0" Nov 1 00:43:54.716741 env[1822]: 2025-11-01 00:43:54.707 [INFO][5429] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" HandleID="k8s-pod-network.f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" Workload="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--qb2p9-eth0" Nov 1 00:43:54.716741 env[1822]: 2025-11-01 00:43:54.709 [INFO][5429] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:43:54.716741 env[1822]: 2025-11-01 00:43:54.711 [INFO][5422] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" Nov 1 00:43:54.716741 env[1822]: time="2025-11-01T00:43:54.714208827Z" level=info msg="TearDown network for sandbox \"f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c\" successfully" Nov 1 00:43:54.716741 env[1822]: time="2025-11-01T00:43:54.714246383Z" level=info msg="StopPodSandbox for \"f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c\" returns successfully" Nov 1 00:43:54.718262 env[1822]: time="2025-11-01T00:43:54.716992670Z" level=info msg="RemovePodSandbox for \"f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c\"" Nov 1 00:43:54.718262 env[1822]: time="2025-11-01T00:43:54.717038074Z" level=info msg="Forcibly stopping sandbox \"f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c\"" Nov 1 00:43:54.799359 env[1822]: 2025-11-01 00:43:54.755 [WARNING][5443] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--qb2p9-eth0", GenerateName:"calico-apiserver-68cc86985f-", Namespace:"calico-apiserver", SelfLink:"", UID:"6d3c3149-9beb-44f8-a7ee-d6982872dcbb", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68cc86985f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-28", ContainerID:"30429913d59a9c1a438da6224a5de8fb453b655c94dd42cdbddb634d16dae4c2", Pod:"calico-apiserver-68cc86985f-qb2p9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali98e527ed158", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:43:54.799359 env[1822]: 2025-11-01 00:43:54.756 [INFO][5443] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" Nov 1 00:43:54.799359 env[1822]: 2025-11-01 00:43:54.756 [INFO][5443] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" iface="eth0" netns="" Nov 1 00:43:54.799359 env[1822]: 2025-11-01 00:43:54.756 [INFO][5443] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" Nov 1 00:43:54.799359 env[1822]: 2025-11-01 00:43:54.756 [INFO][5443] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" Nov 1 00:43:54.799359 env[1822]: 2025-11-01 00:43:54.786 [INFO][5450] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" HandleID="k8s-pod-network.f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" Workload="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--qb2p9-eth0" Nov 1 00:43:54.799359 env[1822]: 2025-11-01 00:43:54.786 [INFO][5450] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:43:54.799359 env[1822]: 2025-11-01 00:43:54.787 [INFO][5450] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:43:54.799359 env[1822]: 2025-11-01 00:43:54.793 [WARNING][5450] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" HandleID="k8s-pod-network.f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" Workload="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--qb2p9-eth0" Nov 1 00:43:54.799359 env[1822]: 2025-11-01 00:43:54.793 [INFO][5450] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" HandleID="k8s-pod-network.f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" Workload="ip--172--31--19--28-k8s-calico--apiserver--68cc86985f--qb2p9-eth0" Nov 1 00:43:54.799359 env[1822]: 2025-11-01 00:43:54.795 [INFO][5450] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:43:54.799359 env[1822]: 2025-11-01 00:43:54.797 [INFO][5443] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c" Nov 1 00:43:54.800052 env[1822]: time="2025-11-01T00:43:54.799417801Z" level=info msg="TearDown network for sandbox \"f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c\" successfully" Nov 1 00:43:54.805329 env[1822]: time="2025-11-01T00:43:54.805250722Z" level=info msg="RemovePodSandbox \"f63a5e327b41dd034de489e5f7020f4ead96103597a231cd583b3e462c82c30c\" returns successfully" Nov 1 00:43:54.805981 env[1822]: time="2025-11-01T00:43:54.805942614Z" level=info msg="StopPodSandbox for \"0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6\"" Nov 1 00:43:54.843000 env[1822]: time="2025-11-01T00:43:54.842951197Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:43:54.845382 env[1822]: time="2025-11-01T00:43:54.845303670Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:43:54.847359 kubelet[2748]: E1101 00:43:54.845801 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:43:54.847359 kubelet[2748]: E1101 00:43:54.845862 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:43:54.847359 kubelet[2748]: E1101 00:43:54.847210 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjd2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6db4456f5f-n6pzz_calico-system(472f8f92-5499-46b7-8902-95424bad4337): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:43:54.848825 kubelet[2748]: E1101 00:43:54.848726 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6db4456f5f-n6pzz" podUID="472f8f92-5499-46b7-8902-95424bad4337" Nov 1 00:43:54.903107 env[1822]: 2025-11-01 00:43:54.853 [WARNING][5466] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--28-k8s-coredns--668d6bf9bc--hj6zc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"df0a1bba-62a6-45de-8540-a37de4852942", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 42, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-28", ContainerID:"9ad1a997f0cb106b32976fa3603b62b719cabaa2b242fd09ed4c053564de7162", Pod:"coredns-668d6bf9bc-hj6zc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib7f7db156e6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:43:54.903107 env[1822]: 2025-11-01 00:43:54.855 
[INFO][5466] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" Nov 1 00:43:54.903107 env[1822]: 2025-11-01 00:43:54.855 [INFO][5466] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" iface="eth0" netns="" Nov 1 00:43:54.903107 env[1822]: 2025-11-01 00:43:54.855 [INFO][5466] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" Nov 1 00:43:54.903107 env[1822]: 2025-11-01 00:43:54.855 [INFO][5466] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" Nov 1 00:43:54.903107 env[1822]: 2025-11-01 00:43:54.888 [INFO][5473] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" HandleID="k8s-pod-network.0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" Workload="ip--172--31--19--28-k8s-coredns--668d6bf9bc--hj6zc-eth0" Nov 1 00:43:54.903107 env[1822]: 2025-11-01 00:43:54.888 [INFO][5473] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:43:54.903107 env[1822]: 2025-11-01 00:43:54.888 [INFO][5473] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:43:54.903107 env[1822]: 2025-11-01 00:43:54.895 [WARNING][5473] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" HandleID="k8s-pod-network.0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" Workload="ip--172--31--19--28-k8s-coredns--668d6bf9bc--hj6zc-eth0" Nov 1 00:43:54.903107 env[1822]: 2025-11-01 00:43:54.895 [INFO][5473] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" HandleID="k8s-pod-network.0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" Workload="ip--172--31--19--28-k8s-coredns--668d6bf9bc--hj6zc-eth0" Nov 1 00:43:54.903107 env[1822]: 2025-11-01 00:43:54.897 [INFO][5473] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:43:54.903107 env[1822]: 2025-11-01 00:43:54.900 [INFO][5466] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" Nov 1 00:43:54.903793 env[1822]: time="2025-11-01T00:43:54.903747187Z" level=info msg="TearDown network for sandbox \"0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6\" successfully" Nov 1 00:43:54.903894 env[1822]: time="2025-11-01T00:43:54.903792242Z" level=info msg="StopPodSandbox for \"0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6\" returns successfully" Nov 1 00:43:54.904485 env[1822]: time="2025-11-01T00:43:54.904423087Z" level=info msg="RemovePodSandbox for \"0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6\"" Nov 1 00:43:54.904646 env[1822]: time="2025-11-01T00:43:54.904483795Z" level=info msg="Forcibly stopping sandbox \"0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6\"" Nov 1 00:43:54.987621 env[1822]: 2025-11-01 00:43:54.944 [WARNING][5487] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--28-k8s-coredns--668d6bf9bc--hj6zc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"df0a1bba-62a6-45de-8540-a37de4852942", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 42, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-28", ContainerID:"9ad1a997f0cb106b32976fa3603b62b719cabaa2b242fd09ed4c053564de7162", Pod:"coredns-668d6bf9bc-hj6zc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib7f7db156e6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:43:54.987621 env[1822]: 2025-11-01 00:43:54.944 
[INFO][5487] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" Nov 1 00:43:54.987621 env[1822]: 2025-11-01 00:43:54.944 [INFO][5487] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" iface="eth0" netns="" Nov 1 00:43:54.987621 env[1822]: 2025-11-01 00:43:54.944 [INFO][5487] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" Nov 1 00:43:54.987621 env[1822]: 2025-11-01 00:43:54.944 [INFO][5487] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" Nov 1 00:43:54.987621 env[1822]: 2025-11-01 00:43:54.969 [INFO][5494] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" HandleID="k8s-pod-network.0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" Workload="ip--172--31--19--28-k8s-coredns--668d6bf9bc--hj6zc-eth0" Nov 1 00:43:54.987621 env[1822]: 2025-11-01 00:43:54.969 [INFO][5494] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:43:54.987621 env[1822]: 2025-11-01 00:43:54.969 [INFO][5494] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:43:54.987621 env[1822]: 2025-11-01 00:43:54.981 [WARNING][5494] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" HandleID="k8s-pod-network.0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" Workload="ip--172--31--19--28-k8s-coredns--668d6bf9bc--hj6zc-eth0" Nov 1 00:43:54.987621 env[1822]: 2025-11-01 00:43:54.981 [INFO][5494] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" HandleID="k8s-pod-network.0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" Workload="ip--172--31--19--28-k8s-coredns--668d6bf9bc--hj6zc-eth0" Nov 1 00:43:54.987621 env[1822]: 2025-11-01 00:43:54.983 [INFO][5494] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:43:54.987621 env[1822]: 2025-11-01 00:43:54.985 [INFO][5487] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6" Nov 1 00:43:54.987621 env[1822]: time="2025-11-01T00:43:54.987549478Z" level=info msg="TearDown network for sandbox \"0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6\" successfully" Nov 1 00:43:54.994772 env[1822]: time="2025-11-01T00:43:54.994724611Z" level=info msg="RemovePodSandbox \"0cc937be4395e55eb3190142354211cef9579553f30bada09fc82623c38ef2a6\" returns successfully" Nov 1 00:43:54.995241 env[1822]: time="2025-11-01T00:43:54.995214777Z" level=info msg="StopPodSandbox for \"9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853\"" Nov 1 00:43:55.085606 env[1822]: 2025-11-01 00:43:55.043 [WARNING][5509] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--28-k8s-calico--kube--controllers--6db4456f5f--n6pzz-eth0", GenerateName:"calico-kube-controllers-6db4456f5f-", Namespace:"calico-system", SelfLink:"", UID:"472f8f92-5499-46b7-8902-95424bad4337", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6db4456f5f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-28", ContainerID:"8485d7b2b09a04d9064c10b4165a754a841ba792ff062b6fb5d24fdb5fe0c5da", Pod:"calico-kube-controllers-6db4456f5f-n6pzz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6c5f1534742", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:43:55.085606 env[1822]: 2025-11-01 00:43:55.044 [INFO][5509] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" Nov 1 00:43:55.085606 env[1822]: 2025-11-01 00:43:55.044 [INFO][5509] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" iface="eth0" netns="" Nov 1 00:43:55.085606 env[1822]: 2025-11-01 00:43:55.044 [INFO][5509] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" Nov 1 00:43:55.085606 env[1822]: 2025-11-01 00:43:55.044 [INFO][5509] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" Nov 1 00:43:55.085606 env[1822]: 2025-11-01 00:43:55.071 [INFO][5517] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" HandleID="k8s-pod-network.9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" Workload="ip--172--31--19--28-k8s-calico--kube--controllers--6db4456f5f--n6pzz-eth0" Nov 1 00:43:55.085606 env[1822]: 2025-11-01 00:43:55.072 [INFO][5517] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:43:55.085606 env[1822]: 2025-11-01 00:43:55.072 [INFO][5517] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:43:55.085606 env[1822]: 2025-11-01 00:43:55.079 [WARNING][5517] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" HandleID="k8s-pod-network.9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" Workload="ip--172--31--19--28-k8s-calico--kube--controllers--6db4456f5f--n6pzz-eth0" Nov 1 00:43:55.085606 env[1822]: 2025-11-01 00:43:55.079 [INFO][5517] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" HandleID="k8s-pod-network.9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" Workload="ip--172--31--19--28-k8s-calico--kube--controllers--6db4456f5f--n6pzz-eth0" Nov 1 00:43:55.085606 env[1822]: 2025-11-01 00:43:55.081 [INFO][5517] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:43:55.085606 env[1822]: 2025-11-01 00:43:55.083 [INFO][5509] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" Nov 1 00:43:55.088201 env[1822]: time="2025-11-01T00:43:55.087234492Z" level=info msg="TearDown network for sandbox \"9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853\" successfully" Nov 1 00:43:55.088201 env[1822]: time="2025-11-01T00:43:55.087269098Z" level=info msg="StopPodSandbox for \"9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853\" returns successfully" Nov 1 00:43:55.088201 env[1822]: time="2025-11-01T00:43:55.087919206Z" level=info msg="RemovePodSandbox for \"9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853\"" Nov 1 00:43:55.088201 env[1822]: time="2025-11-01T00:43:55.087958594Z" level=info msg="Forcibly stopping sandbox \"9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853\"" Nov 1 00:43:55.170350 env[1822]: 2025-11-01 00:43:55.128 [WARNING][5531] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--28-k8s-calico--kube--controllers--6db4456f5f--n6pzz-eth0", GenerateName:"calico-kube-controllers-6db4456f5f-", Namespace:"calico-system", SelfLink:"", UID:"472f8f92-5499-46b7-8902-95424bad4337", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 43, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6db4456f5f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-28", ContainerID:"8485d7b2b09a04d9064c10b4165a754a841ba792ff062b6fb5d24fdb5fe0c5da", Pod:"calico-kube-controllers-6db4456f5f-n6pzz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6c5f1534742", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:43:55.170350 env[1822]: 2025-11-01 00:43:55.128 [INFO][5531] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" Nov 1 00:43:55.170350 env[1822]: 2025-11-01 00:43:55.128 [INFO][5531] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" iface="eth0" netns="" Nov 1 00:43:55.170350 env[1822]: 2025-11-01 00:43:55.128 [INFO][5531] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" Nov 1 00:43:55.170350 env[1822]: 2025-11-01 00:43:55.128 [INFO][5531] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" Nov 1 00:43:55.170350 env[1822]: 2025-11-01 00:43:55.153 [INFO][5538] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" HandleID="k8s-pod-network.9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" Workload="ip--172--31--19--28-k8s-calico--kube--controllers--6db4456f5f--n6pzz-eth0" Nov 1 00:43:55.170350 env[1822]: 2025-11-01 00:43:55.155 [INFO][5538] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:43:55.170350 env[1822]: 2025-11-01 00:43:55.155 [INFO][5538] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:43:55.170350 env[1822]: 2025-11-01 00:43:55.163 [WARNING][5538] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" HandleID="k8s-pod-network.9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" Workload="ip--172--31--19--28-k8s-calico--kube--controllers--6db4456f5f--n6pzz-eth0" Nov 1 00:43:55.170350 env[1822]: 2025-11-01 00:43:55.163 [INFO][5538] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" HandleID="k8s-pod-network.9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" Workload="ip--172--31--19--28-k8s-calico--kube--controllers--6db4456f5f--n6pzz-eth0" Nov 1 00:43:55.170350 env[1822]: 2025-11-01 00:43:55.165 [INFO][5538] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:43:55.170350 env[1822]: 2025-11-01 00:43:55.168 [INFO][5531] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853" Nov 1 00:43:55.171222 env[1822]: time="2025-11-01T00:43:55.170401333Z" level=info msg="TearDown network for sandbox \"9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853\" successfully" Nov 1 00:43:55.179039 env[1822]: time="2025-11-01T00:43:55.178559830Z" level=info msg="RemovePodSandbox \"9e322bc1c122544b9a6a767532b782639df8d7954ef9a0a77810e8614fc4c853\" returns successfully" Nov 1 00:43:56.641721 kernel: kauditd_printk_skb: 35 callbacks suppressed Nov 1 00:43:56.642084 kernel: audit: type=1130 audit(1761957836.636:474): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.19.28:22-147.75.109.163:53528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:43:56.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.19.28:22-147.75.109.163:53528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:43:56.637382 systemd[1]: Started sshd@12-172.31.19.28:22-147.75.109.163:53528.service. Nov 1 00:43:56.861000 audit[5554]: USER_ACCT pid=5554 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:56.863194 sshd[5554]: Accepted publickey for core from 147.75.109.163 port 53528 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:43:56.870382 kernel: audit: type=1101 audit(1761957836.861:475): pid=5554 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:56.869000 audit[5554]: CRED_ACQ pid=5554 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:56.872941 sshd[5554]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:43:56.883231 systemd[1]: Started session-13.scope. Nov 1 00:43:56.885009 kernel: audit: type=1103 audit(1761957836.869:476): pid=5554 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:56.885109 kernel: audit: type=1006 audit(1761957836.869:477): pid=5554 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Nov 1 00:43:56.884418 systemd-logind[1803]: New session 13 of user core. 
Nov 1 00:43:56.869000 audit[5554]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe0b2be290 a2=3 a3=0 items=0 ppid=1 pid=5554 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:56.897374 kernel: audit: type=1300 audit(1761957836.869:477): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe0b2be290 a2=3 a3=0 items=0 ppid=1 pid=5554 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:43:56.869000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:43:56.904519 kernel: audit: type=1327 audit(1761957836.869:477): proctitle=737368643A20636F7265205B707269765D Nov 1 00:43:56.904701 kernel: audit: type=1105 audit(1761957836.897:478): pid=5554 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:56.897000 audit[5554]: USER_START pid=5554 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:56.908000 audit[5557]: CRED_ACQ pid=5557 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:56.918327 kernel: audit: type=1103 audit(1761957836.908:479): pid=5557 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:57.288769 sshd[5554]: pam_unix(sshd:session): session closed for user core Nov 1 00:43:57.288000 audit[5554]: USER_END pid=5554 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:57.293421 systemd-logind[1803]: Session 13 logged out. Waiting for processes to exit. Nov 1 00:43:57.294825 systemd[1]: sshd@12-172.31.19.28:22-147.75.109.163:53528.service: Deactivated successfully. Nov 1 00:43:57.295978 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 00:43:57.297710 systemd-logind[1803]: Removed session 13. Nov 1 00:43:57.309037 kernel: audit: type=1106 audit(1761957837.288:480): pid=5554 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:57.309160 kernel: audit: type=1104 audit(1761957837.289:481): pid=5554 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:57.289000 audit[5554]: CRED_DISP pid=5554 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:43:57.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@12-172.31.19.28:22-147.75.109.163:53528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:02.318121 systemd[1]: Started sshd@13-172.31.19.28:22-147.75.109.163:50426.service. Nov 1 00:44:02.358789 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:44:02.358916 kernel: audit: type=1130 audit(1761957842.321:483): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.19.28:22-147.75.109.163:50426 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:02.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.19.28:22-147.75.109.163:50426 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:02.586792 sshd[5572]: Accepted publickey for core from 147.75.109.163 port 50426 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:44:02.583000 audit[5572]: USER_ACCT pid=5572 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:02.625600 kernel: audit: type=1101 audit(1761957842.583:484): pid=5572 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:02.630000 audit[5572]: CRED_ACQ pid=5572 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:02.632939 sshd[5572]: 
pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:02.660486 kernel: audit: type=1103 audit(1761957842.630:485): pid=5572 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:02.685627 kernel: audit: type=1006 audit(1761957842.631:486): pid=5572 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Nov 1 00:44:02.685786 kernel: audit: type=1300 audit(1761957842.631:486): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffed3b592f0 a2=3 a3=0 items=0 ppid=1 pid=5572 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:02.631000 audit[5572]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffed3b592f0 a2=3 a3=0 items=0 ppid=1 pid=5572 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:02.701149 systemd[1]: Started session-14.scope. Nov 1 00:44:02.702449 systemd-logind[1803]: New session 14 of user core. 
Nov 1 00:44:02.631000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:44:02.722499 kernel: audit: type=1327 audit(1761957842.631:486): proctitle=737368643A20636F7265205B707269765D Nov 1 00:44:02.722000 audit[5572]: USER_START pid=5572 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:02.738430 kernel: audit: type=1105 audit(1761957842.722:487): pid=5572 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:02.739154 kernel: audit: type=1103 audit(1761957842.725:488): pid=5575 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:02.725000 audit[5575]: CRED_ACQ pid=5575 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:03.070811 sshd[5572]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:03.070000 audit[5572]: USER_END pid=5572 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:03.084384 kernel: audit: type=1106 audit(1761957843.070:489): pid=5572 
uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:03.086265 systemd[1]: sshd@13-172.31.19.28:22-147.75.109.163:50426.service: Deactivated successfully. Nov 1 00:44:03.087667 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 00:44:03.089734 systemd-logind[1803]: Session 14 logged out. Waiting for processes to exit. Nov 1 00:44:03.070000 audit[5572]: CRED_DISP pid=5572 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:03.091937 systemd-logind[1803]: Removed session 14. Nov 1 00:44:03.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.19.28:22-147.75.109.163:50426 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:44:03.100425 kernel: audit: type=1104 audit(1761957843.070:490): pid=5572 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:05.349922 kubelet[2748]: E1101 00:44:05.349885 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6db4456f5f-n6pzz" podUID="472f8f92-5499-46b7-8902-95424bad4337" Nov 1 00:44:05.352257 kubelet[2748]: E1101 00:44:05.352224 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dn6kn" podUID="6cd31c79-d021-4671-b1b1-16d458644a79" Nov 1 00:44:06.351434 kubelet[2748]: E1101 00:44:06.351385 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78c5cf549b-vmbkl" podUID="3c464f2f-5c33-4de3-9c8e-29f197089e35" Nov 1 00:44:06.354014 kubelet[2748]: E1101 00:44:06.352514 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68cc86985f-qb2p9" podUID="6d3c3149-9beb-44f8-a7ee-d6982872dcbb" Nov 1 00:44:07.350047 kubelet[2748]: E1101 00:44:07.350017 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68cc86985f-ffnk6" podUID="94ef4085-6c01-4795-a191-98e0030c89bd" Nov 1 00:44:07.350467 kubelet[2748]: E1101 00:44:07.350440 2748 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5lqpx" podUID="9a6bbdac-9f73-4cc6-aadc-84424d8082ea" Nov 1 00:44:08.096440 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:44:08.096564 kernel: audit: type=1130 audit(1761957848.092:492): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.19.28:22-147.75.109.163:50436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:08.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.19.28:22-147.75.109.163:50436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:08.093959 systemd[1]: Started sshd@14-172.31.19.28:22-147.75.109.163:50436.service. 
Nov 1 00:44:08.253000 audit[5585]: USER_ACCT pid=5585 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:08.255618 sshd[5585]: Accepted publickey for core from 147.75.109.163 port 50436 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:44:08.261000 audit[5585]: CRED_ACQ pid=5585 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:08.263330 sshd[5585]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:08.270029 kernel: audit: type=1101 audit(1761957848.253:493): pid=5585 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:08.270145 kernel: audit: type=1103 audit(1761957848.261:494): pid=5585 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:08.270178 kernel: audit: type=1006 audit(1761957848.261:495): pid=5585 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Nov 1 00:44:08.269789 systemd[1]: Started session-15.scope. Nov 1 00:44:08.270760 systemd-logind[1803]: New session 15 of user core. 
Nov 1 00:44:08.261000 audit[5585]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe910ecf10 a2=3 a3=0 items=0 ppid=1 pid=5585 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.281262 kernel: audit: type=1300 audit(1761957848.261:495): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe910ecf10 a2=3 a3=0 items=0 ppid=1 pid=5585 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:08.261000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:44:08.287412 kernel: audit: type=1327 audit(1761957848.261:495): proctitle=737368643A20636F7265205B707269765D Nov 1 00:44:08.280000 audit[5585]: USER_START pid=5585 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:08.282000 audit[5588]: CRED_ACQ pid=5588 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:08.305218 kernel: audit: type=1105 audit(1761957848.280:496): pid=5585 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:08.305411 kernel: audit: type=1103 audit(1761957848.282:497): pid=5588 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:08.479763 sshd[5585]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:08.479000 audit[5585]: USER_END pid=5585 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:08.480000 audit[5585]: CRED_DISP pid=5585 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:08.490010 systemd[1]: sshd@14-172.31.19.28:22-147.75.109.163:50436.service: Deactivated successfully. Nov 1 00:44:08.490939 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 00:44:08.494749 kernel: audit: type=1106 audit(1761957848.479:498): pid=5585 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:08.494879 kernel: audit: type=1104 audit(1761957848.480:499): pid=5585 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:08.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.19.28:22-147.75.109.163:50436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:44:08.496419 systemd-logind[1803]: Session 15 logged out. Waiting for processes to exit. Nov 1 00:44:08.497930 systemd-logind[1803]: Removed session 15. Nov 1 00:44:13.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.19.28:22-147.75.109.163:52068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:13.504508 systemd[1]: Started sshd@15-172.31.19.28:22-147.75.109.163:52068.service. Nov 1 00:44:13.506277 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:44:13.506371 kernel: audit: type=1130 audit(1761957853.503:501): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.19.28:22-147.75.109.163:52068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:13.671000 audit[5598]: USER_ACCT pid=5598 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:13.673161 sshd[5598]: Accepted publickey for core from 147.75.109.163 port 52068 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:44:13.680360 kernel: audit: type=1101 audit(1761957853.671:502): pid=5598 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:13.679000 audit[5598]: CRED_ACQ pid=5598 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 
00:44:13.681705 sshd[5598]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:13.689734 kernel: audit: type=1103 audit(1761957853.679:503): pid=5598 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:13.689868 kernel: audit: type=1006 audit(1761957853.679:504): pid=5598 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Nov 1 00:44:13.679000 audit[5598]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc94390dc0 a2=3 a3=0 items=0 ppid=1 pid=5598 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:13.700631 systemd[1]: Started session-16.scope. Nov 1 00:44:13.701327 systemd-logind[1803]: New session 16 of user core. 
Nov 1 00:44:13.705552 kernel: audit: type=1300 audit(1761957853.679:504): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc94390dc0 a2=3 a3=0 items=0 ppid=1 pid=5598 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:13.679000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:44:13.712528 kernel: audit: type=1327 audit(1761957853.679:504): proctitle=737368643A20636F7265205B707269765D Nov 1 00:44:13.716687 kernel: audit: type=1105 audit(1761957853.711:505): pid=5598 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:13.711000 audit[5598]: USER_START pid=5598 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:13.724390 kernel: audit: type=1103 audit(1761957853.716:506): pid=5601 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:13.716000 audit[5601]: CRED_ACQ pid=5601 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:13.940145 sshd[5598]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:13.947000 audit[5598]: USER_END pid=5598 uid=0 auid=500 ses=16 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:13.958395 kernel: audit: type=1106 audit(1761957853.947:507): pid=5598 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:13.960073 systemd[1]: sshd@15-172.31.19.28:22-147.75.109.163:52068.service: Deactivated successfully. Nov 1 00:44:13.947000 audit[5598]: CRED_DISP pid=5598 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:13.965417 systemd[1]: Started sshd@16-172.31.19.28:22-147.75.109.163:52072.service. Nov 1 00:44:13.965944 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 00:44:13.968645 systemd-logind[1803]: Session 16 logged out. Waiting for processes to exit. Nov 1 00:44:13.971048 systemd-logind[1803]: Removed session 16. Nov 1 00:44:13.974847 kernel: audit: type=1104 audit(1761957853.947:508): pid=5598 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:13.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.19.28:22-147.75.109.163:52068 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:44:13.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.19.28:22-147.75.109.163:52072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:14.155000 audit[5610]: USER_ACCT pid=5610 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:14.157322 sshd[5610]: Accepted publickey for core from 147.75.109.163 port 52072 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:44:14.157000 audit[5610]: CRED_ACQ pid=5610 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:14.157000 audit[5610]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeae1df020 a2=3 a3=0 items=0 ppid=1 pid=5610 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:14.157000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:44:14.159083 sshd[5610]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:14.165347 systemd[1]: Started session-17.scope. Nov 1 00:44:14.165862 systemd-logind[1803]: New session 17 of user core. 
Nov 1 00:44:14.171000 audit[5610]: USER_START pid=5610 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:14.173000 audit[5613]: CRED_ACQ pid=5613 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:16.691865 systemd[1]: run-containerd-runc-k8s.io-8af868a050fed176d64ca5615531f2663c1dc93bce5246a2d030017002f6145e-runc.z9QjQG.mount: Deactivated successfully. Nov 1 00:44:17.867206 sshd[5610]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:17.871000 audit[5610]: USER_END pid=5610 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:17.872000 audit[5610]: CRED_DISP pid=5610 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:17.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.19.28:22-147.75.109.163:52074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:17.884990 systemd[1]: Started sshd@17-172.31.19.28:22-147.75.109.163:52074.service. 
Nov 1 00:44:17.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.19.28:22-147.75.109.163:52072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:17.886856 systemd[1]: sshd@16-172.31.19.28:22-147.75.109.163:52072.service: Deactivated successfully. Nov 1 00:44:17.891045 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 00:44:17.892333 systemd-logind[1803]: Session 17 logged out. Waiting for processes to exit. Nov 1 00:44:17.898741 systemd-logind[1803]: Removed session 17. Nov 1 00:44:18.101681 sshd[5646]: Accepted publickey for core from 147.75.109.163 port 52074 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:44:18.100000 audit[5646]: USER_ACCT pid=5646 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:18.102000 audit[5646]: CRED_ACQ pid=5646 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:18.102000 audit[5646]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffda05a73c0 a2=3 a3=0 items=0 ppid=1 pid=5646 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:18.102000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:44:18.104813 sshd[5646]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:18.112540 systemd[1]: Started session-18.scope. Nov 1 00:44:18.114158 systemd-logind[1803]: New session 18 of user core. 
Nov 1 00:44:18.119000 audit[5646]: USER_START pid=5646 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:18.121000 audit[5651]: CRED_ACQ pid=5651 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:18.370579 env[1822]: time="2025-11-01T00:44:18.370467005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:44:18.603106 env[1822]: time="2025-11-01T00:44:18.603050075Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:18.605291 env[1822]: time="2025-11-01T00:44:18.605227676Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:44:18.606239 kubelet[2748]: E1101 00:44:18.605589 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:44:18.606239 kubelet[2748]: E1101 00:44:18.605644 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:44:18.606239 kubelet[2748]: E1101 00:44:18.605869 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjd2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6db4456f5f-n6pzz_calico-system(472f8f92-5499-46b7-8902-95424bad4337): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:18.606941 env[1822]: time="2025-11-01T00:44:18.606650961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:44:18.607170 kubelet[2748]: E1101 00:44:18.607096 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6db4456f5f-n6pzz" podUID="472f8f92-5499-46b7-8902-95424bad4337" Nov 1 00:44:18.849510 env[1822]: time="2025-11-01T00:44:18.849456656Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:18.851809 env[1822]: time="2025-11-01T00:44:18.851748031Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:44:18.852164 kubelet[2748]: E1101 00:44:18.852107 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:44:18.852237 kubelet[2748]: E1101 00:44:18.852175 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:44:18.852370 kubelet[2748]: E1101 00:44:18.852303 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b7379b16cfad4b00a1e9214c9508a19a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8vkgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78c5cf549b-vmbkl_calico-system(3c464f2f-5c33-4de3-9c8e-29f197089e35): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:18.855977 env[1822]: time="2025-11-01T00:44:18.855883974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:44:19.105299 
env[1822]: time="2025-11-01T00:44:19.105242222Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:19.108446 env[1822]: time="2025-11-01T00:44:19.108185068Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:44:19.108876 kubelet[2748]: E1101 00:44:19.108827 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:44:19.109033 kubelet[2748]: E1101 00:44:19.108890 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:44:19.109606 kubelet[2748]: E1101 00:44:19.109464 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8vkgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78c5cf549b-vmbkl_calico-system(3c464f2f-5c33-4de3-9c8e-29f197089e35): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:19.111106 kubelet[2748]: E1101 00:44:19.111008 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78c5cf549b-vmbkl" podUID="3c464f2f-5c33-4de3-9c8e-29f197089e35" Nov 1 00:44:19.486402 sshd[5646]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:19.502068 kernel: kauditd_printk_skb: 20 callbacks suppressed Nov 1 00:44:19.502233 kernel: audit: type=1106 audit(1761957859.487:525): pid=5646 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:19.487000 audit[5646]: USER_END pid=5646 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:19.503840 systemd[1]: 
sshd@17-172.31.19.28:22-147.75.109.163:52074.service: Deactivated successfully. Nov 1 00:44:19.504973 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 00:44:19.507182 systemd-logind[1803]: Session 18 logged out. Waiting for processes to exit. Nov 1 00:44:19.513377 systemd[1]: Started sshd@18-172.31.19.28:22-147.75.109.163:52084.service. Nov 1 00:44:19.517260 systemd-logind[1803]: Removed session 18. Nov 1 00:44:19.531873 kernel: audit: type=1104 audit(1761957859.487:526): pid=5646 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:19.487000 audit[5646]: CRED_DISP pid=5646 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:19.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.19.28:22-147.75.109.163:52074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:19.547536 kernel: audit: type=1131 audit(1761957859.502:527): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.19.28:22-147.75.109.163:52074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:19.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.19.28:22-147.75.109.163:52084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:44:19.559491 kernel: audit: type=1130 audit(1761957859.512:528): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.19.28:22-147.75.109.163:52084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:19.541000 audit[5664]: NETFILTER_CFG table=filter:131 family=2 entries=26 op=nft_register_rule pid=5664 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:19.580709 kernel: audit: type=1325 audit(1761957859.541:529): table=filter:131 family=2 entries=26 op=nft_register_rule pid=5664 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:19.580818 kernel: audit: type=1300 audit(1761957859.541:529): arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc99cfae50 a2=0 a3=7ffc99cfae3c items=0 ppid=2891 pid=5664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:19.541000 audit[5664]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc99cfae50 a2=0 a3=7ffc99cfae3c items=0 ppid=2891 pid=5664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:19.588517 kernel: audit: type=1327 audit(1761957859.541:529): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:19.541000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:19.564000 audit[5664]: NETFILTER_CFG table=nat:132 family=2 entries=20 op=nft_register_rule pid=5664 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:19.596227 kernel: audit: type=1325 
audit(1761957859.564:530): table=nat:132 family=2 entries=20 op=nft_register_rule pid=5664 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:19.564000 audit[5664]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc99cfae50 a2=0 a3=0 items=0 ppid=2891 pid=5664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:19.611422 kernel: audit: type=1300 audit(1761957859.564:530): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc99cfae50 a2=0 a3=0 items=0 ppid=2891 pid=5664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:19.564000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:19.620012 kernel: audit: type=1327 audit(1761957859.564:530): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:19.620000 audit[5670]: NETFILTER_CFG table=filter:133 family=2 entries=38 op=nft_register_rule pid=5670 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:19.620000 audit[5670]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffff817d1d0 a2=0 a3=7ffff817d1bc items=0 ppid=2891 pid=5670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:19.620000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:19.624000 audit[5670]: NETFILTER_CFG table=nat:134 family=2 entries=20 
op=nft_register_rule pid=5670 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:19.624000 audit[5670]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffff817d1d0 a2=0 a3=0 items=0 ppid=2891 pid=5670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:19.624000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:19.756000 audit[5667]: USER_ACCT pid=5667 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:19.760525 sshd[5667]: Accepted publickey for core from 147.75.109.163 port 52084 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:44:19.759000 audit[5667]: CRED_ACQ pid=5667 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:19.759000 audit[5667]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb3cd4180 a2=3 a3=0 items=0 ppid=1 pid=5667 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:19.759000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:44:19.761807 sshd[5667]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:19.768513 systemd[1]: Started session-19.scope. Nov 1 00:44:19.769661 systemd-logind[1803]: New session 19 of user core. 
Nov 1 00:44:19.777000 audit[5667]: USER_START pid=5667 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:19.780000 audit[5672]: CRED_ACQ pid=5672 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:20.351303 env[1822]: time="2025-11-01T00:44:20.351259522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:44:20.599103 env[1822]: time="2025-11-01T00:44:20.599026401Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:20.601638 env[1822]: time="2025-11-01T00:44:20.601487706Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:44:20.601872 kubelet[2748]: E1101 00:44:20.601826 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:44:20.602367 kubelet[2748]: E1101 00:44:20.601904 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:44:20.602367 kubelet[2748]: E1101 00:44:20.602198 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ghsd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68cc86985f-qb2p9_calico-apiserver(6d3c3149-9beb-44f8-a7ee-d6982872dcbb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:20.603212 env[1822]: time="2025-11-01T00:44:20.603110456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:44:20.603562 kubelet[2748]: E1101 00:44:20.603519 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68cc86985f-qb2p9" podUID="6d3c3149-9beb-44f8-a7ee-d6982872dcbb" Nov 1 00:44:20.650737 sshd[5667]: pam_unix(sshd:session): session 
closed for user core Nov 1 00:44:20.651000 audit[5667]: USER_END pid=5667 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:20.651000 audit[5667]: CRED_DISP pid=5667 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:20.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.19.28:22-147.75.109.163:52084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:20.655455 systemd-logind[1803]: Session 19 logged out. Waiting for processes to exit. Nov 1 00:44:20.655705 systemd[1]: sshd@18-172.31.19.28:22-147.75.109.163:52084.service: Deactivated successfully. Nov 1 00:44:20.656572 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 00:44:20.658310 systemd-logind[1803]: Removed session 19. Nov 1 00:44:20.672957 systemd[1]: Started sshd@19-172.31.19.28:22-147.75.109.163:51396.service. Nov 1 00:44:20.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.19.28:22-147.75.109.163:51396 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:44:20.841250 env[1822]: time="2025-11-01T00:44:20.841075730Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:20.843295 env[1822]: time="2025-11-01T00:44:20.843230573Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:44:20.843501 kubelet[2748]: E1101 00:44:20.843467 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:44:20.843577 kubelet[2748]: E1101 00:44:20.843515 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:44:20.843717 kubelet[2748]: E1101 00:44:20.843649 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wplvg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dn6kn_calico-system(6cd31c79-d021-4671-b1b1-16d458644a79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:20.845143 kubelet[2748]: E1101 00:44:20.845109 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dn6kn" podUID="6cd31c79-d021-4671-b1b1-16d458644a79" Nov 1 00:44:20.867000 audit[5680]: USER_ACCT pid=5680 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 
addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:20.869013 sshd[5680]: Accepted publickey for core from 147.75.109.163 port 51396 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:44:20.869000 audit[5680]: CRED_ACQ pid=5680 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:20.869000 audit[5680]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff998f9f50 a2=3 a3=0 items=0 ppid=1 pid=5680 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:20.869000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:44:20.872296 sshd[5680]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:20.879093 systemd-logind[1803]: New session 20 of user core. Nov 1 00:44:20.879224 systemd[1]: Started session-20.scope. 
Nov 1 00:44:20.882000 audit[5680]: USER_START pid=5680 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:20.884000 audit[5683]: CRED_ACQ pid=5683 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:21.212373 sshd[5680]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:21.211000 audit[5680]: USER_END pid=5680 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:21.212000 audit[5680]: CRED_DISP pid=5680 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:21.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.19.28:22-147.75.109.163:51396 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:21.215611 systemd-logind[1803]: Session 20 logged out. Waiting for processes to exit. Nov 1 00:44:21.215768 systemd[1]: sshd@19-172.31.19.28:22-147.75.109.163:51396.service: Deactivated successfully. Nov 1 00:44:21.216665 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 00:44:21.217095 systemd-logind[1803]: Removed session 20. 
Nov 1 00:44:21.351915 env[1822]: time="2025-11-01T00:44:21.351502793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:44:21.596751 env[1822]: time="2025-11-01T00:44:21.596554326Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:21.599167 env[1822]: time="2025-11-01T00:44:21.598962801Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:44:21.599371 kubelet[2748]: E1101 00:44:21.599298 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:44:21.599445 kubelet[2748]: E1101 00:44:21.599383 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:44:21.599610 kubelet[2748]: E1101 00:44:21.599543 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4kdwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5lqpx_calico-system(9a6bbdac-9f73-4cc6-aadc-84424d8082ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:21.605789 env[1822]: time="2025-11-01T00:44:21.605738773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:44:21.839052 env[1822]: time="2025-11-01T00:44:21.838806395Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:21.841171 env[1822]: time="2025-11-01T00:44:21.841080608Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:44:21.841460 kubelet[2748]: E1101 00:44:21.841413 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:44:21.841875 kubelet[2748]: E1101 00:44:21.841474 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:44:21.841875 kubelet[2748]: E1101 00:44:21.841622 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 
--csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4kdwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5lqpx_calico-system(9a6bbdac-9f73-4cc6-aadc-84424d8082ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:21.842808 kubelet[2748]: E1101 00:44:21.842751 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5lqpx" podUID="9a6bbdac-9f73-4cc6-aadc-84424d8082ea" Nov 1 00:44:22.350438 env[1822]: time="2025-11-01T00:44:22.350208806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:44:22.598041 env[1822]: time="2025-11-01T00:44:22.597867376Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:44:22.600057 env[1822]: time="2025-11-01T00:44:22.599986851Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:44:22.617595 kubelet[2748]: E1101 00:44:22.617470 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:44:22.617595 kubelet[2748]: E1101 00:44:22.617528 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:44:22.617770 kubelet[2748]: E1101 00:44:22.617650 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-st25d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68cc86985f-ffnk6_calico-apiserver(94ef4085-6c01-4795-a191-98e0030c89bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:44:22.618877 kubelet[2748]: E1101 00:44:22.618760 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68cc86985f-ffnk6" podUID="94ef4085-6c01-4795-a191-98e0030c89bd" Nov 1 00:44:25.135000 audit[5696]: NETFILTER_CFG table=filter:135 family=2 entries=26 op=nft_register_rule pid=5696 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:25.141397 kernel: 
kauditd_printk_skb: 27 callbacks suppressed Nov 1 00:44:25.141532 kernel: audit: type=1325 audit(1761957865.135:550): table=filter:135 family=2 entries=26 op=nft_register_rule pid=5696 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:25.135000 audit[5696]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffc8c97570 a2=0 a3=7fffc8c9755c items=0 ppid=2891 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:25.154843 kernel: audit: type=1300 audit(1761957865.135:550): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffc8c97570 a2=0 a3=7fffc8c9755c items=0 ppid=2891 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:25.135000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:25.159435 kernel: audit: type=1327 audit(1761957865.135:550): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:25.159581 kernel: audit: type=1325 audit(1761957865.147:551): table=nat:136 family=2 entries=104 op=nft_register_chain pid=5696 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:25.147000 audit[5696]: NETFILTER_CFG table=nat:136 family=2 entries=104 op=nft_register_chain pid=5696 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 00:44:25.165391 kernel: audit: type=1300 audit(1761957865.147:551): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fffc8c97570 a2=0 a3=7fffc8c9755c items=0 ppid=2891 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:25.147000 audit[5696]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fffc8c97570 a2=0 a3=7fffc8c9755c items=0 ppid=2891 pid=5696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:25.147000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:25.178972 kernel: audit: type=1327 audit(1761957865.147:551): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 00:44:26.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.19.28:22-147.75.109.163:51406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:26.236108 systemd[1]: Started sshd@20-172.31.19.28:22-147.75.109.163:51406.service. Nov 1 00:44:26.243362 kernel: audit: type=1130 audit(1761957866.234:552): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.19.28:22-147.75.109.163:51406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:44:26.390000 audit[5697]: USER_ACCT pid=5697 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:26.392566 sshd[5697]: Accepted publickey for core from 147.75.109.163 port 51406 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:44:26.401556 kernel: audit: type=1101 audit(1761957866.390:553): pid=5697 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:26.402545 sshd[5697]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:26.399000 audit[5697]: CRED_ACQ pid=5697 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:26.415356 kernel: audit: type=1103 audit(1761957866.399:554): pid=5697 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:26.415501 kernel: audit: type=1006 audit(1761957866.399:555): pid=5697 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Nov 1 00:44:26.399000 audit[5697]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc722cf760 a2=3 a3=0 items=0 ppid=1 pid=5697 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:26.399000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:44:26.423152 systemd[1]: Started session-21.scope. Nov 1 00:44:26.424435 systemd-logind[1803]: New session 21 of user core. Nov 1 00:44:26.431000 audit[5697]: USER_START pid=5697 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:26.433000 audit[5700]: CRED_ACQ pid=5700 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:26.626280 sshd[5697]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:26.626000 audit[5697]: USER_END pid=5697 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:26.626000 audit[5697]: CRED_DISP pid=5697 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:26.629604 systemd[1]: sshd@20-172.31.19.28:22-147.75.109.163:51406.service: Deactivated successfully. Nov 1 00:44:26.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.19.28:22-147.75.109.163:51406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:44:26.631436 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 00:44:26.632172 systemd-logind[1803]: Session 21 logged out. Waiting for processes to exit. Nov 1 00:44:26.633496 systemd-logind[1803]: Removed session 21. Nov 1 00:44:31.352374 kubelet[2748]: E1101 00:44:31.352288 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6db4456f5f-n6pzz" podUID="472f8f92-5499-46b7-8902-95424bad4337" Nov 1 00:44:31.654974 systemd[1]: Started sshd@21-172.31.19.28:22-147.75.109.163:39808.service. Nov 1 00:44:31.659616 kernel: kauditd_printk_skb: 7 callbacks suppressed Nov 1 00:44:31.659702 kernel: audit: type=1130 audit(1761957871.653:561): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.19.28:22-147.75.109.163:39808 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:31.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.19.28:22-147.75.109.163:39808 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:44:31.852980 sshd[5712]: Accepted publickey for core from 147.75.109.163 port 39808 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:44:31.851000 audit[5712]: USER_ACCT pid=5712 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:31.854868 sshd[5712]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:31.864617 kernel: audit: type=1101 audit(1761957871.851:562): pid=5712 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:31.877466 kernel: audit: type=1103 audit(1761957871.852:563): pid=5712 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:31.852000 audit[5712]: CRED_ACQ pid=5712 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:31.884445 kernel: audit: type=1006 audit(1761957871.852:564): pid=5712 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Nov 1 00:44:31.852000 audit[5712]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd9589e290 a2=3 a3=0 items=0 ppid=1 pid=5712 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:31.894765 systemd[1]: Started session-22.scope. Nov 1 00:44:31.896384 kernel: audit: type=1300 audit(1761957871.852:564): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd9589e290 a2=3 a3=0 items=0 ppid=1 pid=5712 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:31.896250 systemd-logind[1803]: New session 22 of user core. Nov 1 00:44:31.852000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:44:31.902358 kernel: audit: type=1327 audit(1761957871.852:564): proctitle=737368643A20636F7265205B707269765D Nov 1 00:44:31.906000 audit[5712]: USER_START pid=5712 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:31.921388 kernel: audit: type=1105 audit(1761957871.906:565): pid=5712 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:31.908000 audit[5715]: CRED_ACQ pid=5715 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:31.930373 kernel: audit: type=1103 audit(1761957871.908:566): pid=5715 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh 
res=success' Nov 1 00:44:32.290622 sshd[5712]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:32.290000 audit[5712]: USER_END pid=5712 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:32.306429 kernel: audit: type=1106 audit(1761957872.290:567): pid=5712 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:32.307036 systemd-logind[1803]: Session 22 logged out. Waiting for processes to exit. Nov 1 00:44:32.308445 systemd[1]: sshd@21-172.31.19.28:22-147.75.109.163:39808.service: Deactivated successfully. Nov 1 00:44:32.309493 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 00:44:32.310889 systemd-logind[1803]: Removed session 22. Nov 1 00:44:32.291000 audit[5712]: CRED_DISP pid=5712 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:32.329944 kernel: audit: type=1104 audit(1761957872.291:568): pid=5712 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:32.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.19.28:22-147.75.109.163:39808 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:44:32.358509 kubelet[2748]: E1101 00:44:32.358460 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68cc86985f-qb2p9" podUID="6d3c3149-9beb-44f8-a7ee-d6982872dcbb" Nov 1 00:44:32.362293 kubelet[2748]: E1101 00:44:32.362246 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78c5cf549b-vmbkl" podUID="3c464f2f-5c33-4de3-9c8e-29f197089e35" Nov 1 00:44:33.355885 kubelet[2748]: E1101 00:44:33.355840 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dn6kn" podUID="6cd31c79-d021-4671-b1b1-16d458644a79" Nov 1 00:44:35.351696 kubelet[2748]: E1101 00:44:35.351657 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68cc86985f-ffnk6" podUID="94ef4085-6c01-4795-a191-98e0030c89bd" Nov 1 00:44:36.353464 kubelet[2748]: E1101 00:44:36.353404 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5lqpx" 
podUID="9a6bbdac-9f73-4cc6-aadc-84424d8082ea" Nov 1 00:44:37.307959 systemd[1]: Started sshd@22-172.31.19.28:22-147.75.109.163:39822.service. Nov 1 00:44:37.320095 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:44:37.321223 kernel: audit: type=1130 audit(1761957877.307:570): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.19.28:22-147.75.109.163:39822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:37.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.19.28:22-147.75.109.163:39822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:37.504000 audit[5725]: USER_ACCT pid=5725 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:37.515489 kernel: audit: type=1101 audit(1761957877.504:571): pid=5725 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:37.515568 sshd[5725]: Accepted publickey for core from 147.75.109.163 port 39822 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:44:37.536579 kernel: audit: type=1103 audit(1761957877.513:572): pid=5725 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:37.513000 audit[5725]: CRED_ACQ pid=5725 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:37.533862 sshd[5725]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:37.546187 systemd[1]: Started session-23.scope. Nov 1 00:44:37.547365 kernel: audit: type=1006 audit(1761957877.524:573): pid=5725 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Nov 1 00:44:37.547549 systemd-logind[1803]: New session 23 of user core. Nov 1 00:44:37.524000 audit[5725]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffee589eab0 a2=3 a3=0 items=0 ppid=1 pid=5725 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:37.565514 kernel: audit: type=1300 audit(1761957877.524:573): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffee589eab0 a2=3 a3=0 items=0 ppid=1 pid=5725 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:37.524000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:44:37.571471 kernel: audit: type=1327 audit(1761957877.524:573): proctitle=737368643A20636F7265205B707269765D Nov 1 00:44:37.564000 audit[5725]: USER_START pid=5725 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:37.580487 kernel: audit: type=1105 audit(1761957877.564:574): pid=5725 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:37.565000 audit[5728]: CRED_ACQ pid=5728 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:37.589441 kernel: audit: type=1103 audit(1761957877.565:575): pid=5728 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:37.998786 sshd[5725]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:37.999000 audit[5725]: USER_END pid=5725 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:37.999000 audit[5725]: CRED_DISP pid=5725 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:38.013708 systemd[1]: sshd@22-172.31.19.28:22-147.75.109.163:39822.service: Deactivated successfully. Nov 1 00:44:38.014940 systemd[1]: session-23.scope: Deactivated successfully. 
Nov 1 00:44:38.019383 kernel: audit: type=1106 audit(1761957877.999:576): pid=5725 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:38.019528 kernel: audit: type=1104 audit(1761957877.999:577): pid=5725 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:38.020568 systemd-logind[1803]: Session 23 logged out. Waiting for processes to exit. Nov 1 00:44:38.022132 systemd-logind[1803]: Removed session 23. Nov 1 00:44:38.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.19.28:22-147.75.109.163:39822 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:43.024070 systemd[1]: Started sshd@23-172.31.19.28:22-147.75.109.163:45844.service. Nov 1 00:44:43.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.19.28:22-147.75.109.163:45844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:43.028040 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:44:43.028148 kernel: audit: type=1130 audit(1761957883.023:579): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.19.28:22-147.75.109.163:45844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:44:43.202000 audit[5737]: USER_ACCT pid=5737 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:43.203766 sshd[5737]: Accepted publickey for core from 147.75.109.163 port 45844 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:44:43.213481 kernel: audit: type=1101 audit(1761957883.202:580): pid=5737 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:43.214037 sshd[5737]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:43.221628 systemd[1]: Started session-24.scope. Nov 1 00:44:43.211000 audit[5737]: CRED_ACQ pid=5737 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:43.222689 systemd-logind[1803]: New session 24 of user core. 
Nov 1 00:44:43.239419 kernel: audit: type=1103 audit(1761957883.211:581): pid=5737 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:43.239573 kernel: audit: type=1006 audit(1761957883.211:582): pid=5737 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Nov 1 00:44:43.211000 audit[5737]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffbb48e590 a2=3 a3=0 items=0 ppid=1 pid=5737 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:43.250384 kernel: audit: type=1300 audit(1761957883.211:582): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffbb48e590 a2=3 a3=0 items=0 ppid=1 pid=5737 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:43.211000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:44:43.254925 kernel: audit: type=1327 audit(1761957883.211:582): proctitle=737368643A20636F7265205B707269765D Nov 1 00:44:43.259000 audit[5737]: USER_START pid=5737 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:43.261000 audit[5740]: CRED_ACQ pid=5740 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:43.284154 kernel: 
audit: type=1105 audit(1761957883.259:583): pid=5737 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:43.284301 kernel: audit: type=1103 audit(1761957883.261:584): pid=5740 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:43.361127 kubelet[2748]: E1101 00:44:43.361085 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68cc86985f-qb2p9" podUID="6d3c3149-9beb-44f8-a7ee-d6982872dcbb" Nov 1 00:44:43.368474 kubelet[2748]: E1101 00:44:43.366310 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6db4456f5f-n6pzz" podUID="472f8f92-5499-46b7-8902-95424bad4337" Nov 1 00:44:43.688202 sshd[5737]: 
pam_unix(sshd:session): session closed for user core Nov 1 00:44:43.688000 audit[5737]: USER_END pid=5737 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:43.698376 kernel: audit: type=1106 audit(1761957883.688:585): pid=5737 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:43.701551 systemd-logind[1803]: Session 24 logged out. Waiting for processes to exit. Nov 1 00:44:43.703223 systemd[1]: sshd@23-172.31.19.28:22-147.75.109.163:45844.service: Deactivated successfully. Nov 1 00:44:43.704395 systemd[1]: session-24.scope: Deactivated successfully. Nov 1 00:44:43.706044 systemd-logind[1803]: Removed session 24. Nov 1 00:44:43.697000 audit[5737]: CRED_DISP pid=5737 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:43.716367 kernel: audit: type=1104 audit(1761957883.697:586): pid=5737 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:43.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.19.28:22-147.75.109.163:45844 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:44:45.361191 kubelet[2748]: E1101 00:44:45.361145 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dn6kn" podUID="6cd31c79-d021-4671-b1b1-16d458644a79" Nov 1 00:44:46.351166 kubelet[2748]: E1101 00:44:46.351114 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78c5cf549b-vmbkl" podUID="3c464f2f-5c33-4de3-9c8e-29f197089e35" Nov 1 00:44:48.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.19.28:22-147.75.109.163:45852 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:44:48.711381 systemd[1]: Started sshd@24-172.31.19.28:22-147.75.109.163:45852.service. Nov 1 00:44:48.713092 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:44:48.713153 kernel: audit: type=1130 audit(1761957888.710:588): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.19.28:22-147.75.109.163:45852 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:48.916808 sshd[5770]: Accepted publickey for core from 147.75.109.163 port 45852 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:44:48.915000 audit[5770]: USER_ACCT pid=5770 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:48.925369 kernel: audit: type=1101 audit(1761957888.915:589): pid=5770 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:48.925484 kernel: audit: type=1103 audit(1761957888.916:590): pid=5770 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:48.916000 audit[5770]: CRED_ACQ pid=5770 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:48.936026 kernel: audit: type=1006 audit(1761957888.916:591): pid=5770 uid=0 subj=system_u:system_r:kernel_t:s0 
old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Nov 1 00:44:48.916000 audit[5770]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe61952e0 a2=3 a3=0 items=0 ppid=1 pid=5770 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:48.943121 kernel: audit: type=1300 audit(1761957888.916:591): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe61952e0 a2=3 a3=0 items=0 ppid=1 pid=5770 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:48.943459 sshd[5770]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:48.916000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:44:48.947380 kernel: audit: type=1327 audit(1761957888.916:591): proctitle=737368643A20636F7265205B707269765D Nov 1 00:44:48.952575 systemd[1]: Started session-25.scope. Nov 1 00:44:48.953576 systemd-logind[1803]: New session 25 of user core. 
Nov 1 00:44:48.965000 audit[5770]: USER_START pid=5770 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:48.981588 kernel: audit: type=1105 audit(1761957888.965:592): pid=5770 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:48.981754 kernel: audit: type=1103 audit(1761957888.975:593): pid=5773 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:48.975000 audit[5773]: CRED_ACQ pid=5773 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:49.371150 sshd[5770]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:49.371000 audit[5770]: USER_END pid=5770 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:49.384400 kernel: audit: type=1106 audit(1761957889.371:594): pid=5770 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:49.386980 systemd-logind[1803]: Session 25 logged out. Waiting for processes to exit. Nov 1 00:44:49.388703 systemd[1]: sshd@24-172.31.19.28:22-147.75.109.163:45852.service: Deactivated successfully. Nov 1 00:44:49.389844 systemd[1]: session-25.scope: Deactivated successfully. Nov 1 00:44:49.391580 systemd-logind[1803]: Removed session 25. Nov 1 00:44:49.383000 audit[5770]: CRED_DISP pid=5770 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:49.408367 kernel: audit: type=1104 audit(1761957889.383:595): pid=5770 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:49.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.19.28:22-147.75.109.163:45852 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:44:50.354676 kubelet[2748]: E1101 00:44:50.354631 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68cc86985f-ffnk6" podUID="94ef4085-6c01-4795-a191-98e0030c89bd" Nov 1 00:44:51.356652 kubelet[2748]: E1101 00:44:51.356590 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5lqpx" podUID="9a6bbdac-9f73-4cc6-aadc-84424d8082ea" Nov 1 00:44:54.397199 systemd[1]: Started sshd@25-172.31.19.28:22-147.75.109.163:50132.service. 
Nov 1 00:44:54.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.19.28:22-147.75.109.163:50132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:54.400250 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 00:44:54.400400 kernel: audit: type=1130 audit(1761957894.396:597): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.19.28:22-147.75.109.163:50132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:54.595186 sshd[5786]: Accepted publickey for core from 147.75.109.163 port 50132 ssh2: RSA SHA256:pqbS4gO8wU1hfumMiqbicBJuOTPdWzHbSvYVadSA6Zw Nov 1 00:44:54.595934 sshd[5786]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:44:54.592000 audit[5786]: USER_ACCT pid=5786 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:54.607611 kernel: audit: type=1101 audit(1761957894.592:598): pid=5786 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:54.614818 systemd[1]: Started session-26.scope. Nov 1 00:44:54.616422 systemd-logind[1803]: New session 26 of user core. 
Nov 1 00:44:54.594000 audit[5786]: CRED_ACQ pid=5786 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:54.631385 kernel: audit: type=1103 audit(1761957894.594:599): pid=5786 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:54.657720 kernel: audit: type=1006 audit(1761957894.594:600): pid=5786 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Nov 1 00:44:54.657895 kernel: audit: type=1300 audit(1761957894.594:600): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff6b870930 a2=3 a3=0 items=0 ppid=1 pid=5786 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:54.594000 audit[5786]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff6b870930 a2=3 a3=0 items=0 ppid=1 pid=5786 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:44:54.594000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 00:44:54.662746 kernel: audit: type=1327 audit(1761957894.594:600): proctitle=737368643A20636F7265205B707269765D Nov 1 00:44:54.634000 audit[5786]: USER_START pid=5786 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 
00:44:54.674159 kernel: audit: type=1105 audit(1761957894.634:601): pid=5786 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:54.637000 audit[5789]: CRED_ACQ pid=5789 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:54.684368 kernel: audit: type=1103 audit(1761957894.637:602): pid=5789 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:54.935556 sshd[5786]: pam_unix(sshd:session): session closed for user core Nov 1 00:44:54.936000 audit[5786]: USER_END pid=5786 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:54.947365 kernel: audit: type=1106 audit(1761957894.936:603): pid=5786 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:54.947920 systemd-logind[1803]: Session 26 logged out. Waiting for processes to exit. Nov 1 00:44:54.948719 systemd[1]: sshd@25-172.31.19.28:22-147.75.109.163:50132.service: Deactivated successfully. 
Nov 1 00:44:54.949523 systemd[1]: session-26.scope: Deactivated successfully. Nov 1 00:44:54.950711 systemd-logind[1803]: Removed session 26. Nov 1 00:44:54.936000 audit[5786]: CRED_DISP pid=5786 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:54.962357 kernel: audit: type=1104 audit(1761957894.936:604): pid=5786 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Nov 1 00:44:54.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.19.28:22-147.75.109.163:50132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:44:56.351640 kubelet[2748]: E1101 00:44:56.351589 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68cc86985f-qb2p9" podUID="6d3c3149-9beb-44f8-a7ee-d6982872dcbb" Nov 1 00:44:56.353256 kubelet[2748]: E1101 00:44:56.352768 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dn6kn" podUID="6cd31c79-d021-4671-b1b1-16d458644a79" Nov 1 00:44:57.353048 kubelet[2748]: E1101 00:44:57.353010 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6db4456f5f-n6pzz" podUID="472f8f92-5499-46b7-8902-95424bad4337" Nov 1 00:45:01.361064 env[1822]: time="2025-11-01T00:45:01.360710171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:45:01.616985 env[1822]: time="2025-11-01T00:45:01.616679487Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:45:01.619951 env[1822]: time="2025-11-01T00:45:01.619782255Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:45:01.633666 kubelet[2748]: E1101 00:45:01.633422 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:45:01.641960 kubelet[2748]: E1101 00:45:01.641883 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:45:01.642150 kubelet[2748]: E1101 00:45:01.642095 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b7379b16cfad4b00a1e9214c9508a19a,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8vkgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolic
y{},RestartPolicy:nil,} start failed in pod whisker-78c5cf549b-vmbkl_calico-system(3c464f2f-5c33-4de3-9c8e-29f197089e35): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:45:01.644701 env[1822]: time="2025-11-01T00:45:01.644661015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:45:01.934876 env[1822]: time="2025-11-01T00:45:01.934527825Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:45:01.937792 env[1822]: time="2025-11-01T00:45:01.937577760Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:45:01.940736 kubelet[2748]: E1101 00:45:01.940683 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:45:01.941016 kubelet[2748]: E1101 00:45:01.940985 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:45:01.942557 
kubelet[2748]: E1101 00:45:01.942496 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8vkgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-78c5cf549b-vmbkl_calico-system(3c464f2f-5c33-4de3-9c8e-29f197089e35): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:45:01.944988 kubelet[2748]: E1101 00:45:01.944934 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78c5cf549b-vmbkl" podUID="3c464f2f-5c33-4de3-9c8e-29f197089e35" Nov 1 00:45:04.350376 env[1822]: time="2025-11-01T00:45:04.349925025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:45:04.613186 env[1822]: time="2025-11-01T00:45:04.613068157Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:45:04.617211 env[1822]: time="2025-11-01T00:45:04.617127648Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:45:04.617732 kubelet[2748]: E1101 00:45:04.617691 2748 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:45:04.620619 kubelet[2748]: E1101 00:45:04.620561 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:45:04.620992 kubelet[2748]: E1101 00:45:04.620937 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-st25d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68cc86985f-ffnk6_calico-apiserver(94ef4085-6c01-4795-a191-98e0030c89bd): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:45:04.622903 kubelet[2748]: E1101 00:45:04.622864 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68cc86985f-ffnk6" podUID="94ef4085-6c01-4795-a191-98e0030c89bd" Nov 1 00:45:06.349956 env[1822]: time="2025-11-01T00:45:06.349672351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:45:06.605882 env[1822]: time="2025-11-01T00:45:06.604357552Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:45:06.606858 env[1822]: time="2025-11-01T00:45:06.606709137Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:45:06.607126 kubelet[2748]: E1101 00:45:06.607097 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:45:06.607511 kubelet[2748]: E1101 00:45:06.607493 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:45:06.607736 kubelet[2748]: E1101 00:45:06.607690 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4kdwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource
{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5lqpx_calico-system(9a6bbdac-9f73-4cc6-aadc-84424d8082ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:45:06.610586 env[1822]: time="2025-11-01T00:45:06.610221185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:45:06.847068 env[1822]: time="2025-11-01T00:45:06.846996026Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:45:06.849503 env[1822]: time="2025-11-01T00:45:06.849416296Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:45:06.849975 kubelet[2748]: E1101 00:45:06.849930 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:45:06.850174 kubelet[2748]: E1101 00:45:06.850148 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:45:06.850854 kubelet[2748]: E1101 00:45:06.850786 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4kdwn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,En
vFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5lqpx_calico-system(9a6bbdac-9f73-4cc6-aadc-84424d8082ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:45:06.852260 kubelet[2748]: E1101 00:45:06.852206 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5lqpx" podUID="9a6bbdac-9f73-4cc6-aadc-84424d8082ea" Nov 1 00:45:08.350258 env[1822]: time="2025-11-01T00:45:08.350217477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:45:08.606788 env[1822]: time="2025-11-01T00:45:08.606629886Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:45:08.608876 env[1822]: time="2025-11-01T00:45:08.608782214Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:45:08.609222 kubelet[2748]: E1101 00:45:08.609171 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:45:08.610159 kubelet[2748]: E1101 00:45:08.609243 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:45:08.610159 kubelet[2748]: E1101 00:45:08.609917 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ghsd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-68cc86985f-qb2p9_calico-apiserver(6d3c3149-9beb-44f8-a7ee-d6982872dcbb): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:45:08.611387 kubelet[2748]: E1101 00:45:08.611315 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68cc86985f-qb2p9" podUID="6d3c3149-9beb-44f8-a7ee-d6982872dcbb" Nov 1 00:45:10.350183 env[1822]: time="2025-11-01T00:45:10.350144204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:45:10.604201 env[1822]: time="2025-11-01T00:45:10.604055371Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:45:10.606391 env[1822]: time="2025-11-01T00:45:10.606321519Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:45:10.606853 kubelet[2748]: E1101 00:45:10.606792 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:45:10.606853 kubelet[2748]: E1101 00:45:10.606852 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:45:10.607242 kubelet[2748]: E1101 00:45:10.606993 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wplvg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dn6kn_calico-system(6cd31c79-d021-4671-b1b1-16d458644a79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:45:10.608265 kubelet[2748]: E1101 00:45:10.608212 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dn6kn" podUID="6cd31c79-d021-4671-b1b1-16d458644a79" Nov 1 
00:45:11.350162 env[1822]: time="2025-11-01T00:45:11.350123690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:45:11.579757 env[1822]: time="2025-11-01T00:45:11.579695419Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:45:11.581999 env[1822]: time="2025-11-01T00:45:11.581906705Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:45:11.582166 kubelet[2748]: E1101 00:45:11.582132 2748 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:45:11.582265 kubelet[2748]: E1101 00:45:11.582180 2748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:45:11.582367 kubelet[2748]: E1101 00:45:11.582313 2748 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vjd2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6db4456f5f-n6pzz_calico-system(472f8f92-5499-46b7-8902-95424bad4337): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:45:11.583537 kubelet[2748]: E1101 00:45:11.583500 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6db4456f5f-n6pzz" podUID="472f8f92-5499-46b7-8902-95424bad4337" Nov 1 00:45:12.349489 kubelet[2748]: E1101 00:45:12.349450 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78c5cf549b-vmbkl" podUID="3c464f2f-5c33-4de3-9c8e-29f197089e35" Nov 1 00:45:16.665615 systemd[1]: run-containerd-runc-k8s.io-8af868a050fed176d64ca5615531f2663c1dc93bce5246a2d030017002f6145e-runc.MXuLdf.mount: Deactivated successfully. 
Nov 1 00:45:18.349187 kubelet[2748]: E1101 00:45:18.349136 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68cc86985f-ffnk6" podUID="94ef4085-6c01-4795-a191-98e0030c89bd" Nov 1 00:45:20.350184 kubelet[2748]: E1101 00:45:20.350121 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5lqpx" podUID="9a6bbdac-9f73-4cc6-aadc-84424d8082ea" Nov 1 00:45:20.446332 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10556918de567080a9089ee0c26ef85a30dedd7578caea1c48f2585b1b05fcde-rootfs.mount: Deactivated successfully. 
Nov 1 00:45:20.462258 env[1822]: time="2025-11-01T00:45:20.462191417Z" level=info msg="shim disconnected" id=10556918de567080a9089ee0c26ef85a30dedd7578caea1c48f2585b1b05fcde Nov 1 00:45:20.462258 env[1822]: time="2025-11-01T00:45:20.462245463Z" level=warning msg="cleaning up after shim disconnected" id=10556918de567080a9089ee0c26ef85a30dedd7578caea1c48f2585b1b05fcde namespace=k8s.io Nov 1 00:45:20.462258 env[1822]: time="2025-11-01T00:45:20.462261211Z" level=info msg="cleaning up dead shim" Nov 1 00:45:20.471668 env[1822]: time="2025-11-01T00:45:20.471545525Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:45:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5865 runtime=io.containerd.runc.v2\n" Nov 1 00:45:20.638855 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-957fccbf7989e7e241a2f7e298f3ced59afae2b711abe6d0ff09433f5c8280c2-rootfs.mount: Deactivated successfully. Nov 1 00:45:20.655760 env[1822]: time="2025-11-01T00:45:20.655688745Z" level=info msg="shim disconnected" id=957fccbf7989e7e241a2f7e298f3ced59afae2b711abe6d0ff09433f5c8280c2 Nov 1 00:45:20.655760 env[1822]: time="2025-11-01T00:45:20.655751646Z" level=warning msg="cleaning up after shim disconnected" id=957fccbf7989e7e241a2f7e298f3ced59afae2b711abe6d0ff09433f5c8280c2 namespace=k8s.io Nov 1 00:45:20.655760 env[1822]: time="2025-11-01T00:45:20.655764089Z" level=info msg="cleaning up dead shim" Nov 1 00:45:20.665759 env[1822]: time="2025-11-01T00:45:20.665713942Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:45:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5892 runtime=io.containerd.runc.v2\n" Nov 1 00:45:21.215280 kubelet[2748]: I1101 00:45:21.215238 2748 scope.go:117] "RemoveContainer" containerID="10556918de567080a9089ee0c26ef85a30dedd7578caea1c48f2585b1b05fcde" Nov 1 00:45:21.215551 kubelet[2748]: I1101 00:45:21.215364 2748 scope.go:117] "RemoveContainer" 
containerID="957fccbf7989e7e241a2f7e298f3ced59afae2b711abe6d0ff09433f5c8280c2" Nov 1 00:45:21.251944 env[1822]: time="2025-11-01T00:45:21.251782167Z" level=info msg="CreateContainer within sandbox \"702d13ff3fda0ec2f8fde99e6c07478e012c2b011e358dc7f0de1609c9a6184a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Nov 1 00:45:21.251944 env[1822]: time="2025-11-01T00:45:21.251838111Z" level=info msg="CreateContainer within sandbox \"21f6501d782afded9ec5b95925aac37fd2c21af3296e97891250b9ad6fd5e25a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Nov 1 00:45:21.293044 env[1822]: time="2025-11-01T00:45:21.292978797Z" level=info msg="CreateContainer within sandbox \"702d13ff3fda0ec2f8fde99e6c07478e012c2b011e358dc7f0de1609c9a6184a\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"398bcebb1dbe3f63060f7cae08355f221e6319f9c9b1328a769d822f5ed03fd2\"" Nov 1 00:45:21.293523 env[1822]: time="2025-11-01T00:45:21.293497888Z" level=info msg="StartContainer for \"398bcebb1dbe3f63060f7cae08355f221e6319f9c9b1328a769d822f5ed03fd2\"" Nov 1 00:45:21.303464 env[1822]: time="2025-11-01T00:45:21.303406193Z" level=info msg="CreateContainer within sandbox \"21f6501d782afded9ec5b95925aac37fd2c21af3296e97891250b9ad6fd5e25a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"8cf2638c2542c95e0486b42edc08c052daef16310cc43533aa78efad67cb8e6a\"" Nov 1 00:45:21.304040 env[1822]: time="2025-11-01T00:45:21.304001480Z" level=info msg="StartContainer for \"8cf2638c2542c95e0486b42edc08c052daef16310cc43533aa78efad67cb8e6a\"" Nov 1 00:45:21.438095 env[1822]: time="2025-11-01T00:45:21.438031164Z" level=info msg="StartContainer for \"398bcebb1dbe3f63060f7cae08355f221e6319f9c9b1328a769d822f5ed03fd2\" returns successfully" Nov 1 00:45:21.448824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4012308337.mount: Deactivated successfully. 
Nov 1 00:45:21.475775 env[1822]: time="2025-11-01T00:45:21.475264876Z" level=info msg="StartContainer for \"8cf2638c2542c95e0486b42edc08c052daef16310cc43533aa78efad67cb8e6a\" returns successfully" Nov 1 00:45:22.349443 kubelet[2748]: E1101 00:45:22.349397 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dn6kn" podUID="6cd31c79-d021-4671-b1b1-16d458644a79" Nov 1 00:45:23.350031 kubelet[2748]: E1101 00:45:23.349997 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6db4456f5f-n6pzz" podUID="472f8f92-5499-46b7-8902-95424bad4337" Nov 1 00:45:23.350793 kubelet[2748]: E1101 00:45:23.350767 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-68cc86985f-qb2p9" podUID="6d3c3149-9beb-44f8-a7ee-d6982872dcbb" Nov 1 00:45:25.647422 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52d27442d56927ece45b19987119e768a5187f631d664222f598e1ce9864d503-rootfs.mount: Deactivated successfully. Nov 1 00:45:25.673064 env[1822]: time="2025-11-01T00:45:25.672910495Z" level=info msg="shim disconnected" id=52d27442d56927ece45b19987119e768a5187f631d664222f598e1ce9864d503 Nov 1 00:45:25.673064 env[1822]: time="2025-11-01T00:45:25.672959220Z" level=warning msg="cleaning up after shim disconnected" id=52d27442d56927ece45b19987119e768a5187f631d664222f598e1ce9864d503 namespace=k8s.io Nov 1 00:45:25.673064 env[1822]: time="2025-11-01T00:45:25.672968948Z" level=info msg="cleaning up dead shim" Nov 1 00:45:25.681590 env[1822]: time="2025-11-01T00:45:25.681544895Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:45:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5992 runtime=io.containerd.runc.v2\n" Nov 1 00:45:26.236533 kubelet[2748]: I1101 00:45:26.236498 2748 scope.go:117] "RemoveContainer" containerID="52d27442d56927ece45b19987119e768a5187f631d664222f598e1ce9864d503" Nov 1 00:45:26.239204 env[1822]: time="2025-11-01T00:45:26.239149819Z" level=info msg="CreateContainer within sandbox \"2206219efcb422674ceb454b278a7b90d50add3c05ede8f7b383da6a1bc9da12\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Nov 1 00:45:26.259467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1983767030.mount: Deactivated successfully. 
Nov 1 00:45:26.268722 env[1822]: time="2025-11-01T00:45:26.268646013Z" level=info msg="CreateContainer within sandbox \"2206219efcb422674ceb454b278a7b90d50add3c05ede8f7b383da6a1bc9da12\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"2ec318e7a021a4f29222af8b7647bee18367d49636ad6f2346a96ec3bf932676\"" Nov 1 00:45:26.269243 env[1822]: time="2025-11-01T00:45:26.269200706Z" level=info msg="StartContainer for \"2ec318e7a021a4f29222af8b7647bee18367d49636ad6f2346a96ec3bf932676\"" Nov 1 00:45:26.326490 kubelet[2748]: E1101 00:45:26.326018 2748 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-28?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 1 00:45:26.359479 env[1822]: time="2025-11-01T00:45:26.359422234Z" level=info msg="StartContainer for \"2ec318e7a021a4f29222af8b7647bee18367d49636ad6f2346a96ec3bf932676\" returns successfully" Nov 1 00:45:27.349370 kubelet[2748]: E1101 00:45:27.349315 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-78c5cf549b-vmbkl" podUID="3c464f2f-5c33-4de3-9c8e-29f197089e35" Nov 1 00:45:31.351864 kubelet[2748]: E1101 00:45:31.351816 2748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5lqpx" podUID="9a6bbdac-9f73-4cc6-aadc-84424d8082ea"