Nov 1 04:16:43.917763 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Oct 31 23:02:53 -00 2025 Nov 1 04:16:43.917789 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2 Nov 1 04:16:43.917802 kernel: BIOS-provided physical RAM map: Nov 1 04:16:43.917809 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Nov 1 04:16:43.917816 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Nov 1 04:16:43.917823 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Nov 1 04:16:43.917832 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Nov 1 04:16:43.917839 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Nov 1 04:16:43.917846 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Nov 1 04:16:43.917853 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Nov 1 04:16:43.917863 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 1 04:16:43.917870 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Nov 1 04:16:43.917877 kernel: NX (Execute Disable) protection: active Nov 1 04:16:43.917885 kernel: SMBIOS 2.8 present. Nov 1 04:16:43.917894 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Nov 1 04:16:43.917902 kernel: Hypervisor detected: KVM Nov 1 04:16:43.917912 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 1 04:16:43.917920 kernel: kvm-clock: cpu 0, msr 741a0001, primary cpu clock Nov 1 04:16:43.917928 kernel: kvm-clock: using sched offset of 4182141101 cycles Nov 1 04:16:43.917936 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 1 04:16:43.917945 kernel: tsc: Detected 2294.576 MHz processor Nov 1 04:16:43.917953 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 1 04:16:43.917961 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 1 04:16:43.917969 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Nov 1 04:16:43.917977 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 1 04:16:43.917987 kernel: Using GB pages for direct mapping Nov 1 04:16:43.917995 kernel: ACPI: Early table checksum verification disabled Nov 1 04:16:43.918003 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Nov 1 04:16:43.918011 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 04:16:43.918019 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 04:16:43.918027 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 04:16:43.918035 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Nov 1 04:16:43.918043 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 04:16:43.918051 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 04:16:43.918061 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 04:16:43.918068 kernel: 
ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 04:16:43.918076 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Nov 1 04:16:43.918084 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Nov 1 04:16:43.918092 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Nov 1 04:16:43.918100 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Nov 1 04:16:43.918112 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Nov 1 04:16:43.918123 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Nov 1 04:16:43.918132 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Nov 1 04:16:43.918140 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Nov 1 04:16:43.918149 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Nov 1 04:16:43.918157 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Nov 1 04:16:43.918166 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Nov 1 04:16:43.918174 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Nov 1 04:16:43.918185 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Nov 1 04:16:43.918193 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Nov 1 04:16:43.918201 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Nov 1 04:16:43.918210 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Nov 1 04:16:43.918218 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Nov 1 04:16:43.918227 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Nov 1 04:16:43.918235 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Nov 1 04:16:43.918244 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Nov 1 04:16:43.918252 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Nov 1 04:16:43.918261 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Nov 1 04:16:43.918271 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Nov 1 04:16:43.918280 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Nov 1 04:16:43.918288 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Nov 1 04:16:43.918297 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Nov 1 04:16:43.918306 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Nov 1 04:16:43.918314 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Nov 1 04:16:43.918323 kernel: Zone ranges: Nov 1 04:16:43.918332 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 1 04:16:43.918340 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Nov 1 04:16:43.918351 kernel: Normal empty Nov 1 04:16:43.918359 kernel: Movable zone start for each node Nov 1 04:16:43.918368 kernel: Early memory node ranges Nov 1 04:16:43.918377 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Nov 1 04:16:43.918385 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Nov 1 04:16:43.918394 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Nov 1 04:16:43.918402 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 1 04:16:43.918411 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 1 04:16:43.918419 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Nov 1 04:16:43.918430 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 1 04:16:43.918439 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 1 04:16:43.918447 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 1 04:16:43.918456 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 1 04:16:43.918464 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 1 04:16:43.918473 
kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 1 04:16:43.918482 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 1 04:16:43.918490 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 1 04:16:43.918499 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 1 04:16:43.918509 kernel: TSC deadline timer available Nov 1 04:16:43.918518 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Nov 1 04:16:43.918527 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Nov 1 04:16:43.918535 kernel: Booting paravirtualized kernel on KVM Nov 1 04:16:43.918544 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 1 04:16:43.918553 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1 Nov 1 04:16:43.920607 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144 Nov 1 04:16:43.920617 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152 Nov 1 04:16:43.920625 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Nov 1 04:16:43.920639 kernel: kvm-guest: stealtime: cpu 0, msr 7da1c0c0 Nov 1 04:16:43.920647 kernel: kvm-guest: PV spinlocks enabled Nov 1 04:16:43.920656 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 1 04:16:43.920665 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Nov 1 04:16:43.920673 kernel: Policy zone: DMA32 Nov 1 04:16:43.920684 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2 Nov 1 04:16:43.920693 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Nov 1 04:16:43.920702 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 1 04:16:43.920713 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 1 04:16:43.920722 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 1 04:16:43.920731 kernel: Memory: 1903832K/2096616K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47496K init, 4084K bss, 192524K reserved, 0K cma-reserved) Nov 1 04:16:43.920739 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Nov 1 04:16:43.920748 kernel: ftrace: allocating 34614 entries in 136 pages Nov 1 04:16:43.920757 kernel: ftrace: allocated 136 pages with 2 groups Nov 1 04:16:43.920765 kernel: rcu: Hierarchical RCU implementation. Nov 1 04:16:43.920774 kernel: rcu: RCU event tracing is enabled. Nov 1 04:16:43.920783 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Nov 1 04:16:43.920794 kernel: Rude variant of Tasks RCU enabled. Nov 1 04:16:43.920803 kernel: Tracing variant of Tasks RCU enabled. Nov 1 04:16:43.920812 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Nov 1 04:16:43.920821 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Nov 1 04:16:43.920830 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Nov 1 04:16:43.920838 kernel: random: crng init done Nov 1 04:16:43.920847 kernel: Console: colour VGA+ 80x25 Nov 1 04:16:43.920866 kernel: printk: console [tty0] enabled Nov 1 04:16:43.920875 kernel: printk: console [ttyS0] enabled Nov 1 04:16:43.920885 kernel: ACPI: Core revision 20210730 Nov 1 04:16:43.920894 kernel: APIC: Switch to symmetric I/O mode setup Nov 1 04:16:43.920903 kernel: x2apic enabled Nov 1 04:16:43.920914 kernel: Switched APIC routing to physical x2apic. Nov 1 04:16:43.920924 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2113312ac93, max_idle_ns: 440795244843 ns Nov 1 04:16:43.920933 kernel: Calibrating delay loop (skipped) preset value.. 4589.15 BogoMIPS (lpj=2294576) Nov 1 04:16:43.920942 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 1 04:16:43.920952 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Nov 1 04:16:43.920963 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Nov 1 04:16:43.920972 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 1 04:16:43.920985 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Nov 1 04:16:43.921008 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Nov 1 04:16:43.921018 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Nov 1 04:16:43.921027 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Nov 1 04:16:43.921036 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Nov 1 04:16:43.921045 kernel: RETBleed: Mitigation: Enhanced IBRS Nov 1 04:16:43.921054 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 1 04:16:43.921064 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Nov 1 04:16:43.921075 kernel: TAA: Mitigation: Clear CPU buffers Nov 1 04:16:43.921084 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 1 04:16:43.921094 kernel: GDS: Unknown: Dependent on hypervisor status Nov 1 04:16:43.921103 kernel: active return thunk: its_return_thunk Nov 1 04:16:43.921112 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 1 04:16:43.921121 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 1 04:16:43.921130 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 1 04:16:43.921139 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 1 04:16:43.921148 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Nov 1 04:16:43.921157 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Nov 1 04:16:43.921166 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Nov 1 04:16:43.921182 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Nov 1 04:16:43.921195 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 1 04:16:43.921204 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Nov 1 04:16:43.921214 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Nov 1 04:16:43.921223 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Nov 1 04:16:43.921232 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8 Nov 1 04:16:43.921241 kernel: x86/fpu: 
Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format. Nov 1 04:16:43.921250 kernel: Freeing SMP alternatives memory: 32K Nov 1 04:16:43.921259 kernel: pid_max: default: 32768 minimum: 301 Nov 1 04:16:43.921268 kernel: LSM: Security Framework initializing Nov 1 04:16:43.921277 kernel: SELinux: Initializing. Nov 1 04:16:43.921287 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 1 04:16:43.921299 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 1 04:16:43.921308 kernel: smpboot: CPU0: Intel Xeon Processor (Cascadelake) (family: 0x6, model: 0x55, stepping: 0x6) Nov 1 04:16:43.921317 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Nov 1 04:16:43.921326 kernel: signal: max sigframe size: 3632 Nov 1 04:16:43.921335 kernel: rcu: Hierarchical SRCU implementation. Nov 1 04:16:43.921345 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 1 04:16:43.921354 kernel: smp: Bringing up secondary CPUs ... Nov 1 04:16:43.921364 kernel: x86: Booting SMP configuration: Nov 1 04:16:43.921373 kernel: .... node #0, CPUs: #1 Nov 1 04:16:43.921382 kernel: kvm-clock: cpu 1, msr 741a0041, secondary cpu clock Nov 1 04:16:43.921393 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Nov 1 04:16:43.921402 kernel: kvm-guest: stealtime: cpu 1, msr 7da5c0c0 Nov 1 04:16:43.921411 kernel: smp: Brought up 1 node, 2 CPUs Nov 1 04:16:43.921421 kernel: smpboot: Max logical packages: 16 Nov 1 04:16:43.921430 kernel: smpboot: Total of 2 processors activated (9178.30 BogoMIPS) Nov 1 04:16:43.921439 kernel: devtmpfs: initialized Nov 1 04:16:43.921449 kernel: x86/mm: Memory block size: 128MB Nov 1 04:16:43.921458 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 1 04:16:43.921467 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Nov 1 04:16:43.921479 kernel: pinctrl core: initialized pinctrl subsystem Nov 1 04:16:43.921488 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 1 04:16:43.921497 kernel: audit: initializing netlink subsys (disabled) Nov 1 04:16:43.921506 kernel: audit: type=2000 audit(1761970603.520:1): state=initialized audit_enabled=0 res=1 Nov 1 04:16:43.921515 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 1 04:16:43.921524 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 1 04:16:43.921534 kernel: cpuidle: using governor menu Nov 1 04:16:43.921543 kernel: ACPI: bus type PCI registered Nov 1 04:16:43.921552 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 1 04:16:43.921581 kernel: dca service started, version 1.12.1 Nov 1 04:16:43.921591 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Nov 1 04:16:43.921600 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Nov 1 04:16:43.921609 kernel: PCI: Using configuration type 1 for base access Nov 1 04:16:43.921618 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 1 04:16:43.921627 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Nov 1 04:16:43.921637 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Nov 1 04:16:43.921646 kernel: ACPI: Added _OSI(Module Device) Nov 1 04:16:43.921655 kernel: ACPI: Added _OSI(Processor Device) Nov 1 04:16:43.921666 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 1 04:16:43.921676 kernel: ACPI: Added _OSI(Linux-Dell-Video) Nov 1 04:16:43.921685 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Nov 1 04:16:43.921694 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Nov 1 04:16:43.921703 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 1 04:16:43.921713 kernel: ACPI: Interpreter enabled Nov 1 04:16:43.921722 kernel: ACPI: PM: (supports S0 S5) Nov 1 04:16:43.921732 kernel: ACPI: Using IOAPIC for interrupt routing Nov 1 04:16:43.921741 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 1 04:16:43.921752 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Nov 1 04:16:43.921761 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 1 04:16:43.921913 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 1 04:16:43.922007 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Nov 1 04:16:43.922093 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Nov 1 04:16:43.922105 kernel: PCI host bridge to bus 0000:00 Nov 1 04:16:43.922194 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 1 04:16:43.922277 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 1 04:16:43.922354 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 1 04:16:43.922438 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Nov 1 04:16:43.922518 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Nov 1 04:16:43.924648 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Nov 1 04:16:43.924733 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 1 04:16:43.924840 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Nov 1 04:16:43.924947 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Nov 1 04:16:43.925040 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Nov 1 04:16:43.925133 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Nov 1 04:16:43.925219 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Nov 1 04:16:43.925306 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 1 04:16:43.925401 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Nov 1 04:16:43.925493 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Nov 1 04:16:43.925637 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Nov 1 04:16:43.925727 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Nov 1 04:16:43.925821 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Nov 1 04:16:43.925908 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Nov 1 04:16:43.926001 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Nov 1 04:16:43.926092 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff] Nov 1 04:16:43.926184 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Nov 1 04:16:43.926271 kernel: pci 0000:00:02.4: reg 0x10: [mem 
0xfea55000-0xfea55fff] Nov 1 04:16:43.926364 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Nov 1 04:16:43.926451 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Nov 1 04:16:43.926582 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Nov 1 04:16:43.926675 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Nov 1 04:16:43.926776 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Nov 1 04:16:43.926864 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Nov 1 04:16:43.926955 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Nov 1 04:16:43.927042 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Nov 1 04:16:43.927128 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Nov 1 04:16:43.927213 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Nov 1 04:16:43.927301 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Nov 1 04:16:43.927399 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Nov 1 04:16:43.927487 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Nov 1 04:16:43.929607 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Nov 1 04:16:43.929704 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Nov 1 04:16:43.929802 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Nov 1 04:16:43.929890 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Nov 1 04:16:43.929989 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Nov 1 04:16:43.930076 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Nov 1 04:16:43.930163 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Nov 1 04:16:43.930257 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Nov 1 04:16:43.930344 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Nov 1 04:16:43.930442 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Nov 1 04:16:43.930537 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Nov 1 04:16:43.932679 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Nov 1 04:16:43.932772 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Nov 1 04:16:43.932860 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Nov 1 04:16:43.932958 kernel: pci_bus 0000:02: extended config space not accessible Nov 1 04:16:43.933063 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Nov 1 04:16:43.933163 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Nov 1 04:16:43.933255 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Nov 1 04:16:43.933345 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Nov 1 04:16:43.933441 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Nov 1 04:16:43.933532 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Nov 1 04:16:43.933654 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Nov 1 04:16:43.933741 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Nov 1 04:16:43.933832 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Nov 1 04:16:43.933933 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Nov 1 04:16:43.934023 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Nov 1 04:16:43.934111 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Nov 1 04:16:43.934196 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Nov 1 04:16:43.934282 
kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Nov 1 04:16:43.934367 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Nov 1 04:16:43.934454 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Nov 1 04:16:43.934542 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Nov 1 04:16:43.936681 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Nov 1 04:16:43.936774 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Nov 1 04:16:43.936862 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Nov 1 04:16:43.936950 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Nov 1 04:16:43.937036 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Nov 1 04:16:43.937121 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Nov 1 04:16:43.937210 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Nov 1 04:16:43.937300 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Nov 1 04:16:43.937385 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Nov 1 04:16:43.937473 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Nov 1 04:16:43.937608 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Nov 1 04:16:43.937697 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Nov 1 04:16:43.937709 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 1 04:16:43.937719 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 1 04:16:43.937729 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 1 04:16:43.937742 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 1 04:16:43.937751 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Nov 1 04:16:43.937760 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Nov 1 04:16:43.937770 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Nov 1 04:16:43.937779 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Nov 1 04:16:43.937788 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Nov 1 04:16:43.937797 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Nov 1 04:16:43.937806 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Nov 1 04:16:43.937815 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Nov 1 04:16:43.937827 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Nov 1 04:16:43.937836 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Nov 1 04:16:43.937845 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Nov 1 04:16:43.937855 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Nov 1 04:16:43.937864 kernel: iommu: Default domain type: Translated Nov 1 04:16:43.937873 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 1 04:16:43.937958 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Nov 1 04:16:43.938044 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 1 04:16:43.938129 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Nov 1 04:16:43.938144 kernel: vgaarb: loaded Nov 1 04:16:43.938154 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 1 04:16:43.938163 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 1 04:16:43.938173 kernel: PTP clock support registered Nov 1 04:16:43.938182 kernel: PCI: Using ACPI for IRQ routing Nov 1 04:16:43.938191 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 1 04:16:43.938200 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Nov 1 04:16:43.938209 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Nov 1 04:16:43.938221 kernel: clocksource: Switched to clocksource kvm-clock Nov 1 04:16:43.938230 kernel: VFS: Disk quotas dquot_6.6.0 Nov 1 04:16:43.938240 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 1 04:16:43.938250 kernel: pnp: PnP ACPI init Nov 1 04:16:43.938343 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Nov 1 04:16:43.938356 kernel: pnp: PnP ACPI: found 5 devices Nov 1 04:16:43.938365 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 1 04:16:43.938375 kernel: NET: Registered PF_INET protocol family Nov 1 04:16:43.938384 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 1 04:16:43.938396 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 1 04:16:43.938406 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 1 04:16:43.938415 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 1 04:16:43.938424 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Nov 1 04:16:43.938434 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 1 04:16:43.938443 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 1 04:16:43.938453 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 1 04:16:43.938462 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 1 04:16:43.938474 kernel: NET: Registered PF_XDP protocol family Nov 1 04:16:43.938574 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Nov 1 04:16:43.938664 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Nov 1 04:16:43.938752 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Nov 1 04:16:43.938839 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Nov 1 04:16:43.938925 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Nov 1 04:16:43.939013 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Nov 1 04:16:43.939103 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Nov 1 04:16:43.939190 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Nov 1 04:16:43.939277 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Nov 1 04:16:43.939363 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Nov 1 04:16:43.939448 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Nov 1 04:16:43.939534 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Nov 1 04:16:43.941685 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Nov 1 04:16:43.941778 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Nov 1 04:16:43.941864 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Nov 1 04:16:43.941950 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Nov 1 04:16:43.942042 kernel: pci 0000:01:00.0: PCI 
bridge to [bus 02] Nov 1 04:16:43.942131 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Nov 1 04:16:43.942218 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Nov 1 04:16:43.942304 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Nov 1 04:16:43.942390 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Nov 1 04:16:43.942479 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Nov 1 04:16:43.946603 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Nov 1 04:16:43.946702 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Nov 1 04:16:43.946789 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Nov 1 04:16:43.946875 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Nov 1 04:16:43.946968 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Nov 1 04:16:43.947057 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Nov 1 04:16:43.947142 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Nov 1 04:16:43.947228 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Nov 1 04:16:43.947313 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Nov 1 04:16:43.947399 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Nov 1 04:16:43.947487 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Nov 1 04:16:43.947586 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Nov 1 04:16:43.947673 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Nov 1 04:16:43.947759 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Nov 1 04:16:43.947849 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Nov 1 04:16:43.947936 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Nov 1 04:16:43.948023 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Nov 1 04:16:43.948109 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Nov 1 04:16:43.948194 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Nov 1 04:16:43.948281 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Nov 1 04:16:43.948366 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Nov 1 04:16:43.948457 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Nov 1 04:16:43.948543 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Nov 1 04:16:43.948644 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Nov 1 04:16:43.948732 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Nov 1 04:16:43.948820 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Nov 1 04:16:43.948908 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Nov 1 04:16:43.948995 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Nov 1 04:16:43.949082 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 1 04:16:43.949161 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 1 04:16:43.949240 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 1 04:16:43.949319 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Nov 1 04:16:43.949397 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Nov 1 04:16:43.949475 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Nov 1 04:16:43.949581 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Nov 1 04:16:43.949670 kernel: pci_bus 0000:01: resource 1 [mem 
0xfd800000-0xfdbfffff] Nov 1 04:16:43.949754 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Nov 1 04:16:43.949845 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Nov 1 04:16:43.949936 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Nov 1 04:16:43.950020 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Nov 1 04:16:43.950103 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Nov 1 04:16:43.950194 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Nov 1 04:16:43.950281 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Nov 1 04:16:43.950363 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Nov 1 04:16:43.950459 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Nov 1 04:16:43.950544 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Nov 1 04:16:43.950643 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Nov 1 04:16:43.950733 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Nov 1 04:16:43.950817 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Nov 1 04:16:43.950903 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Nov 1 04:16:43.951058 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Nov 1 04:16:43.951141 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Nov 1 04:16:43.951227 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Nov 1 04:16:43.951317 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Nov 1 04:16:43.951400 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Nov 1 04:16:43.951489 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Nov 1 04:16:43.954633 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Nov 1 04:16:43.954726 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Nov 1 04:16:43.954809 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Nov 1 04:16:43.954823 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Nov 1 04:16:43.954834 kernel: PCI: CLS 0 bytes, default 64 Nov 1 04:16:43.954845 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 1 04:16:43.954855 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Nov 1 04:16:43.954869 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 1 04:16:43.954880 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2113312ac93, max_idle_ns: 440795244843 ns Nov 1 04:16:43.954890 kernel: Initialise system trusted keyrings Nov 1 04:16:43.954900 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 1 04:16:43.954910 kernel: Key type asymmetric registered Nov 1 04:16:43.954920 kernel: Asymmetric key parser 'x509' registered Nov 1 04:16:43.954930 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Nov 1 04:16:43.954940 kernel: io scheduler mq-deadline registered Nov 1 04:16:43.954950 kernel: io scheduler kyber registered Nov 1 04:16:43.954962 kernel: io scheduler bfq registered Nov 1 04:16:43.955051 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Nov 1 04:16:43.955143 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Nov 1 04:16:43.955231 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 04:16:43.955322 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 
25 Nov 1 04:16:43.955410 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Nov 1 04:16:43.955498 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 04:16:43.955632 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Nov 1 04:16:43.955721 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Nov 1 04:16:43.955808 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 04:16:43.955897 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Nov 1 04:16:43.955983 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Nov 1 04:16:43.956070 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 04:16:43.956163 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Nov 1 04:16:43.956250 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Nov 1 04:16:43.956336 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 04:16:43.956424 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Nov 1 04:16:43.956512 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Nov 1 04:16:43.957731 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 04:16:43.957830 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Nov 1 04:16:43.957917 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Nov 1 04:16:43.958003 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 04:16:43.958089 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Nov 1 04:16:43.958175 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Nov 1 04:16:43.958261 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 04:16:43.958277 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 1 04:16:43.958288 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 1 04:16:43.958298 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Nov 1 04:16:43.958308 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 1 04:16:43.958318 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 1 04:16:43.958329 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 1 04:16:43.958339 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 1 04:16:43.958349 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 1 04:16:43.958361 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 1 04:16:43.958449 kernel: rtc_cmos 00:03: RTC can wake from S4 Nov 1 04:16:43.958531 kernel: rtc_cmos 00:03: registered as rtc0 Nov 1 04:16:43.959679 kernel: rtc_cmos 00:03: setting system clock to 2025-11-01T04:16:43 UTC (1761970603) Nov 1 04:16:43.959764 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Nov 1 04:16:43.959778 kernel: intel_pstate: CPU model not supported Nov 1 04:16:43.959788 kernel: NET: Registered PF_INET6 protocol family Nov 1 04:16:43.959798 kernel: Segment Routing with IPv6 Nov 1 04:16:43.959812 kernel: In-situ OAM (IOAM) with IPv6 
Nov 1 04:16:43.959822 kernel: NET: Registered PF_PACKET protocol family Nov 1 04:16:43.959832 kernel: Key type dns_resolver registered Nov 1 04:16:43.959843 kernel: IPI shorthand broadcast: enabled Nov 1 04:16:43.959853 kernel: sched_clock: Marking stable (736188064, 118587092)->(1062743723, -207968567) Nov 1 04:16:43.959863 kernel: registered taskstats version 1 Nov 1 04:16:43.959876 kernel: Loading compiled-in X.509 certificates Nov 1 04:16:43.959886 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: f2055682e6899ad8548fd369019e7b47939b46a0' Nov 1 04:16:43.959896 kernel: Key type .fscrypt registered Nov 1 04:16:43.959909 kernel: Key type fscrypt-provisioning registered Nov 1 04:16:43.959930 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 1 04:16:43.959939 kernel: ima: Allocated hash algorithm: sha1 Nov 1 04:16:43.959948 kernel: ima: No architecture policies found Nov 1 04:16:43.959958 kernel: clk: Disabling unused clocks Nov 1 04:16:43.959967 kernel: Freeing unused kernel image (initmem) memory: 47496K Nov 1 04:16:43.959976 kernel: Write protecting the kernel read-only data: 28672k Nov 1 04:16:43.959985 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Nov 1 04:16:43.959995 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Nov 1 04:16:43.960006 kernel: Run /init as init process Nov 1 04:16:43.960015 kernel: with arguments: Nov 1 04:16:43.960024 kernel: /init Nov 1 04:16:43.960033 kernel: with environment: Nov 1 04:16:43.960041 kernel: HOME=/ Nov 1 04:16:43.960050 kernel: TERM=linux Nov 1 04:16:43.960059 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Nov 1 04:16:43.960071 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Nov 1 04:16:43.960086 systemd[1]: Detected virtualization kvm. Nov 1 04:16:43.960096 systemd[1]: Detected architecture x86-64. Nov 1 04:16:43.960105 systemd[1]: Running in initrd. Nov 1 04:16:43.960114 systemd[1]: No hostname configured, using default hostname. Nov 1 04:16:43.960124 systemd[1]: Hostname set to . Nov 1 04:16:43.960134 systemd[1]: Initializing machine ID from VM UUID. Nov 1 04:16:43.960143 systemd[1]: Queued start job for default target initrd.target. Nov 1 04:16:43.960153 systemd[1]: Started systemd-ask-password-console.path. Nov 1 04:16:43.960165 systemd[1]: Reached target cryptsetup.target. Nov 1 04:16:43.960174 systemd[1]: Reached target paths.target. Nov 1 04:16:43.960183 systemd[1]: Reached target slices.target. Nov 1 04:16:43.960193 systemd[1]: Reached target swap.target. Nov 1 04:16:43.960203 systemd[1]: Reached target timers.target. Nov 1 04:16:43.960213 systemd[1]: Listening on iscsid.socket. Nov 1 04:16:43.960222 systemd[1]: Listening on iscsiuio.socket. Nov 1 04:16:43.960234 systemd[1]: Listening on systemd-journald-audit.socket. Nov 1 04:16:43.960243 systemd[1]: Listening on systemd-journald-dev-log.socket. Nov 1 04:16:43.960253 systemd[1]: Listening on systemd-journald.socket. Nov 1 04:16:43.960278 systemd[1]: Listening on systemd-networkd.socket. Nov 1 04:16:43.960288 systemd[1]: Listening on systemd-udevd-control.socket. Nov 1 04:16:43.960299 systemd[1]: Listening on systemd-udevd-kernel.socket. Nov 1 04:16:43.960310 systemd[1]: Reached target sockets.target. 
Nov 1 04:16:43.960320 systemd[1]: Starting kmod-static-nodes.service... Nov 1 04:16:43.960331 systemd[1]: Finished network-cleanup.service. Nov 1 04:16:43.960343 systemd[1]: Starting systemd-fsck-usr.service... Nov 1 04:16:43.960354 systemd[1]: Starting systemd-journald.service... Nov 1 04:16:43.960364 systemd[1]: Starting systemd-modules-load.service... Nov 1 04:16:43.960374 systemd[1]: Starting systemd-resolved.service... Nov 1 04:16:43.960385 systemd[1]: Starting systemd-vconsole-setup.service... Nov 1 04:16:43.960396 systemd[1]: Finished kmod-static-nodes.service. Nov 1 04:16:43.960406 systemd[1]: Finished systemd-fsck-usr.service. Nov 1 04:16:43.960417 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Nov 1 04:16:43.960427 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 1 04:16:43.960440 kernel: Bridge firewalling registered Nov 1 04:16:43.960456 systemd-journald[201]: Journal started Nov 1 04:16:43.960514 systemd-journald[201]: Runtime Journal (/run/log/journal/1445c84aa852424abb308b08b3c3cbea) is 4.7M, max 38.1M, 33.3M free. Nov 1 04:16:43.914633 systemd-resolved[203]: Positive Trust Anchors: Nov 1 04:16:43.914646 systemd-resolved[203]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 04:16:43.914684 systemd-resolved[203]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 04:16:43.971018 systemd[1]: Started systemd-resolved.service. Nov 1 04:16:43.917895 systemd-resolved[203]: Defaulting to hostname 'linux'. Nov 1 04:16:43.975100 systemd[1]: Started systemd-journald.service. Nov 1 04:16:43.975122 kernel: audit: type=1130 audit(1761970603.970:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:43.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:43.921215 systemd-modules-load[202]: Inserted module 'overlay' Nov 1 04:16:43.989065 kernel: audit: type=1130 audit(1761970603.974:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:43.989088 kernel: audit: type=1130 audit(1761970603.978:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:43.989101 kernel: audit: type=1130 audit(1761970603.978:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 04:16:43.989114 kernel: SCSI subsystem initialized Nov 1 04:16:43.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:43.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:43.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:43.959519 systemd-modules-load[202]: Inserted module 'br_netfilter' Nov 1 04:16:43.975748 systemd[1]: Finished systemd-vconsole-setup.service. Nov 1 04:16:43.978816 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Nov 1 04:16:43.979268 systemd[1]: Reached target nss-lookup.target. Nov 1 04:16:43.980364 systemd[1]: Starting dracut-cmdline-ask.service... Nov 1 04:16:44.000053 systemd[1]: Finished dracut-cmdline-ask.service. Nov 1 04:16:44.002907 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 1 04:16:44.002929 kernel: device-mapper: uevent: version 1.0.3 Nov 1 04:16:44.002942 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Nov 1 04:16:44.014665 kernel: audit: type=1130 audit(1761970604.004:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:44.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:44.013189 systemd-modules-load[202]: Inserted module 'dm_multipath' Nov 1 04:16:44.015916 systemd[1]: Starting dracut-cmdline.service... Nov 1 04:16:44.018280 systemd[1]: Finished systemd-modules-load.service. Nov 1 04:16:44.028814 dracut-cmdline[221]: dracut-dracut-053 Nov 1 04:16:44.029333 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=c4c72a4f851a6da01cbc7150799371516ef8311ea786098908d8eb164df01ee2 Nov 1 04:16:44.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:44.032461 systemd[1]: Starting systemd-sysctl.service... Nov 1 04:16:44.034768 kernel: audit: type=1130 audit(1761970604.030:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:44.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:44.041331 systemd[1]: Finished systemd-sysctl.service. 
Nov 1 04:16:44.044833 kernel: audit: type=1130 audit(1761970604.041:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:44.131594 kernel: Loading iSCSI transport class v2.0-870. Nov 1 04:16:44.150604 kernel: iscsi: registered transport (tcp) Nov 1 04:16:44.179003 kernel: iscsi: registered transport (qla4xxx) Nov 1 04:16:44.179071 kernel: QLogic iSCSI HBA Driver Nov 1 04:16:44.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:44.254078 systemd[1]: Finished dracut-cmdline.service. Nov 1 04:16:44.261170 kernel: audit: type=1130 audit(1761970604.253:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:44.255463 systemd[1]: Starting dracut-pre-udev.service... Nov 1 04:16:44.325662 kernel: raid6: avx512x4 gen() 17447 MB/s Nov 1 04:16:44.342635 kernel: raid6: avx512x4 xor() 7867 MB/s Nov 1 04:16:44.359654 kernel: raid6: avx512x2 gen() 17878 MB/s Nov 1 04:16:44.376602 kernel: raid6: avx512x2 xor() 21468 MB/s Nov 1 04:16:44.393653 kernel: raid6: avx512x1 gen() 17862 MB/s Nov 1 04:16:44.410632 kernel: raid6: avx512x1 xor() 19538 MB/s Nov 1 04:16:44.427620 kernel: raid6: avx2x4 gen() 17771 MB/s Nov 1 04:16:44.444636 kernel: raid6: avx2x4 xor() 7169 MB/s Nov 1 04:16:44.461633 kernel: raid6: avx2x2 gen() 17718 MB/s Nov 1 04:16:44.478630 kernel: raid6: avx2x2 xor() 15758 MB/s Nov 1 04:16:44.495628 kernel: raid6: avx2x1 gen() 13816 MB/s Nov 1 04:16:44.512630 kernel: raid6: avx2x1 xor() 13743 MB/s Nov 1 04:16:44.529620 kernel: raid6: sse2x4 gen() 8137 MB/s Nov 1 04:16:44.546633 kernel: raid6: sse2x4 xor() 5267 MB/s Nov 1 04:16:44.563633 kernel: raid6: sse2x2 gen() 8950 MB/s Nov 1 04:16:44.580703 kernel: raid6: sse2x2 xor() 5214 MB/s Nov 1 04:16:44.597643 kernel: raid6: sse2x1 gen() 8201 MB/s Nov 1 04:16:44.615239 kernel: raid6: sse2x1 xor() 4114 MB/s Nov 1 04:16:44.615378 kernel: raid6: using algorithm avx512x2 gen() 17878 MB/s Nov 1 04:16:44.615417 kernel: raid6: .... xor() 21468 MB/s, rmw enabled Nov 1 04:16:44.615960 kernel: raid6: using avx512x2 recovery algorithm Nov 1 04:16:44.635578 kernel: xor: automatically using best checksumming function avx Nov 1 04:16:44.751626 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Nov 1 04:16:44.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:44.768337 systemd[1]: Finished dracut-pre-udev.service. Nov 1 04:16:44.772628 kernel: audit: type=1130 audit(1761970604.768:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:44.772000 audit: BPF prog-id=7 op=LOAD Nov 1 04:16:44.772000 audit: BPF prog-id=8 op=LOAD Nov 1 04:16:44.773251 systemd[1]: Starting systemd-udevd.service... Nov 1 04:16:44.787411 systemd-udevd[401]: Using default interface naming scheme 'v252'. Nov 1 04:16:44.792794 systemd[1]: Started systemd-udevd.service. 
Nov 1 04:16:44.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:44.798367 systemd[1]: Starting dracut-pre-trigger.service... Nov 1 04:16:44.814331 dracut-pre-trigger[416]: rd.md=0: removing MD RAID activation Nov 1 04:16:44.855079 systemd[1]: Finished dracut-pre-trigger.service. Nov 1 04:16:44.856764 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 04:16:44.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:44.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:44.915530 systemd[1]: Finished systemd-udev-trigger.service. Nov 1 04:16:44.971652 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Nov 1 04:16:45.010958 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 1 04:16:45.010976 kernel: GPT:17805311 != 125829119 Nov 1 04:16:45.010988 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 1 04:16:45.011000 kernel: GPT:17805311 != 125829119 Nov 1 04:16:45.011011 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 1 04:16:45.011023 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 04:16:45.011041 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 04:16:45.020043 kernel: libata version 3.00 loaded. Nov 1 04:16:45.029287 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (449) Nov 1 04:16:45.034889 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Nov 1 04:16:45.086301 kernel: AVX2 version of gcm_enc/dec engaged. 
Nov 1 04:16:45.086330 kernel: AES CTR mode by8 optimization enabled Nov 1 04:16:45.086350 kernel: ahci 0000:00:1f.2: version 3.0 Nov 1 04:16:45.116937 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 1 04:16:45.116957 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Nov 1 04:16:45.117074 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 1 04:16:45.117174 kernel: ACPI: bus type USB registered Nov 1 04:16:45.117188 kernel: usbcore: registered new interface driver usbfs Nov 1 04:16:45.117200 kernel: usbcore: registered new interface driver hub Nov 1 04:16:45.117212 kernel: usbcore: registered new device driver usb Nov 1 04:16:45.117230 kernel: scsi host0: ahci Nov 1 04:16:45.117348 kernel: scsi host1: ahci Nov 1 04:16:45.117453 kernel: scsi host2: ahci Nov 1 04:16:45.117574 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Nov 1 04:16:45.117677 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Nov 1 04:16:45.117775 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Nov 1 04:16:45.117873 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Nov 1 04:16:45.117980 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Nov 1 04:16:45.118075 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Nov 1 04:16:45.118170 kernel: scsi host3: ahci Nov 1 04:16:45.118271 kernel: hub 1-0:1.0: USB hub found Nov 1 04:16:45.118387 kernel: hub 1-0:1.0: 4 ports detected Nov 1 04:16:45.118500 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Nov 1 04:16:45.118697 kernel: hub 2-0:1.0: USB hub found Nov 1 04:16:45.118813 kernel: hub 2-0:1.0: 4 ports detected Nov 1 04:16:45.118920 kernel: scsi host4: ahci Nov 1 04:16:45.119023 kernel: scsi host5: ahci Nov 1 04:16:45.119183 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Nov 1 04:16:45.119197 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Nov 1 04:16:45.119209 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Nov 1 04:16:45.119225 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Nov 1 04:16:45.119237 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Nov 1 04:16:45.119249 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Nov 1 04:16:45.089875 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Nov 1 04:16:45.093917 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Nov 1 04:16:45.103704 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Nov 1 04:16:45.118393 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Nov 1 04:16:45.119686 systemd[1]: Starting disk-uuid.service... Nov 1 04:16:45.126231 disk-uuid[510]: Primary Header is updated. Nov 1 04:16:45.126231 disk-uuid[510]: Secondary Entries is updated. Nov 1 04:16:45.126231 disk-uuid[510]: Secondary Header is updated. 
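The GPT warnings a few entries up ("Alternate GPT header not at the end of the disk", 17805311 != 125829119) are typical when an image built for a smaller disk boots on a larger virtual disk; the initrd's disk-uuid step then rewrites the primary and secondary headers, as logged just above. For reference, a hedged sketch of doing the same repair by hand with sgdisk; the device path is this VM's /dev/vda, but the tool choice and flags are an assumption, not what disk-uuid itself runs:

```python
# Hedged sketch: relocate the backup GPT header to the end of an enlarged disk.
# Uses sgdisk's -e/--move-second-header option; run as root. This only
# illustrates the repair the log reports -- the initrd performs it on its own.
import subprocess

DISK = "/dev/vda"  # the virtio disk reported above

subprocess.run(["sgdisk", "-e", DISK], check=True)   # move backup header/table to the disk's end
subprocess.run(["partprobe", DISK], check=True)      # ask the kernel to re-read the partition table
```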
Nov 1 04:16:45.131533 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 04:16:45.343618 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Nov 1 04:16:45.428601 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 1 04:16:45.428719 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 1 04:16:45.435051 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 1 04:16:45.435183 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 1 04:16:45.437158 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 1 04:16:45.439228 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 1 04:16:45.488615 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 1 04:16:45.494978 kernel: usbcore: registered new interface driver usbhid Nov 1 04:16:45.495063 kernel: usbhid: USB HID core driver Nov 1 04:16:45.500956 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Nov 1 04:16:45.501020 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Nov 1 04:16:46.139579 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 04:16:46.140256 disk-uuid[514]: The operation has completed successfully. Nov 1 04:16:46.183408 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 04:16:46.184144 systemd[1]: Finished disk-uuid.service. Nov 1 04:16:46.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:46.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:46.185896 systemd[1]: Starting verity-setup.service... Nov 1 04:16:46.202655 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 1 04:16:46.255740 systemd[1]: Found device dev-mapper-usr.device. Nov 1 04:16:46.257297 systemd[1]: Finished verity-setup.service. Nov 1 04:16:46.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:46.260030 systemd[1]: Mounting sysusr-usr.mount... Nov 1 04:16:46.343601 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Nov 1 04:16:46.343990 systemd[1]: Mounted sysusr-usr.mount. Nov 1 04:16:46.344901 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Nov 1 04:16:46.350943 systemd[1]: Starting ignition-setup.service... Nov 1 04:16:46.356708 systemd[1]: Starting parse-ip-for-networkd.service... Nov 1 04:16:46.369770 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 04:16:46.369815 kernel: BTRFS info (device vda6): using free space tree Nov 1 04:16:46.369829 kernel: BTRFS info (device vda6): has skinny extents Nov 1 04:16:46.385198 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 1 04:16:46.391322 systemd[1]: Finished ignition-setup.service. Nov 1 04:16:46.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:46.392666 systemd[1]: Starting ignition-fetch-offline.service... 
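verity-setup.service above maps the read-only /usr partition through dm-verity (the kernel reports sha256 via the AVX2 implementation) before it is mounted by sysusr-usr.mount. A rough sketch of opening a verity mapping with the stock veritysetup tool, under the assumption of a separate hash device and a known root hash; all three values below are placeholders, not taken from this boot:

```python
# Hedged sketch: open a dm-verity mapping named "usr" and mount it read-only.
# veritysetup's "open <data_device> <name> <hash_device> <root_hash>" action is
# standard cryptsetup tooling; the device paths and hash here are placeholders.
import subprocess

DATA_DEV = "/dev/disk/by-partlabel/USR-A"      # data partition (label as seen above)
HASH_DEV = "/dev/disk/by-partlabel/USR-HASH"   # placeholder hash partition (assumption)
ROOT_HASH = "0" * 64                           # placeholder sha256 root hash

subprocess.run(["veritysetup", "open", DATA_DEV, "usr", HASH_DEV, ROOT_HASH], check=True)
# Mount point must already exist; the initrd mounts the verified device at /sysusr/usr.
subprocess.run(["mount", "-o", "ro", "/dev/mapper/usr", "/sysusr/usr"], check=True)
```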
Nov 1 04:16:46.484154 systemd[1]: Finished parse-ip-for-networkd.service. Nov 1 04:16:46.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:46.486000 audit: BPF prog-id=9 op=LOAD Nov 1 04:16:46.488692 systemd[1]: Starting systemd-networkd.service... Nov 1 04:16:46.522196 systemd-networkd[714]: lo: Link UP Nov 1 04:16:46.523128 systemd-networkd[714]: lo: Gained carrier Nov 1 04:16:46.524472 systemd-networkd[714]: Enumeration completed Nov 1 04:16:46.525049 systemd[1]: Started systemd-networkd.service. Nov 1 04:16:46.526037 systemd-networkd[714]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 04:16:46.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:46.526174 systemd[1]: Reached target network.target. Nov 1 04:16:46.528407 systemd-networkd[714]: eth0: Link UP Nov 1 04:16:46.528820 systemd-networkd[714]: eth0: Gained carrier Nov 1 04:16:46.530305 systemd[1]: Starting iscsiuio.service... Nov 1 04:16:46.546161 systemd[1]: Started iscsiuio.service. Nov 1 04:16:46.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:46.548150 systemd[1]: Starting iscsid.service... Nov 1 04:16:46.554283 iscsid[719]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Nov 1 04:16:46.554283 iscsid[719]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Nov 1 04:16:46.554283 iscsid[719]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Nov 1 04:16:46.554283 iscsid[719]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Nov 1 04:16:46.554283 iscsid[719]: If using hardware iscsi like qla4xxx this message can be ignored. Nov 1 04:16:46.554283 iscsid[719]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Nov 1 04:16:46.554283 iscsid[719]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Nov 1 04:16:46.558745 systemd[1]: Started iscsid.service. Nov 1 04:16:46.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:46.560699 systemd[1]: Starting dracut-initqueue.service... Nov 1 04:16:46.567288 ignition[636]: Ignition 2.14.0 Nov 1 04:16:46.567299 ignition[636]: Stage: fetch-offline Nov 1 04:16:46.567379 ignition[636]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 04:16:46.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:46.570425 systemd[1]: Finished ignition-fetch-offline.service. 
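The iscsid warnings above are harmless here (no software iSCSI is in use); the fragment lost from the quoted message is the standard IQN pattern, InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. If software iSCSI were needed, the file could be seeded along these lines; the IQN below is a made-up example in that format:

```python
# Hedged sketch: create /etc/iscsi/initiatorname.iscsi so iscsid stops warning.
# The IQN is a hypothetical example following the iqn.yyyy-mm.<reversed domain>
# convention quoted in the log; use a real identifier for your own organisation.
from pathlib import Path

initiator_name = "iqn.2001-04.com.example:node1"  # hypothetical
path = Path("/etc/iscsi/initiatorname.iscsi")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(f"InitiatorName={initiator_name}\n")
```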
Nov 1 04:16:46.567416 ignition[636]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Nov 1 04:16:46.572538 systemd[1]: Starting ignition-fetch.service... Nov 1 04:16:46.569310 ignition[636]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 1 04:16:46.573675 systemd-networkd[714]: eth0: DHCPv4 address 10.244.102.154/30, gateway 10.244.102.153 acquired from 10.244.102.153 Nov 1 04:16:46.569419 ignition[636]: parsed url from cmdline: "" Nov 1 04:16:46.569423 ignition[636]: no config URL provided Nov 1 04:16:46.569429 ignition[636]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 04:16:46.569437 ignition[636]: no config at "/usr/lib/ignition/user.ign" Nov 1 04:16:46.569442 ignition[636]: failed to fetch config: resource requires networking Nov 1 04:16:46.569571 ignition[636]: Ignition finished successfully Nov 1 04:16:46.582913 ignition[723]: Ignition 2.14.0 Nov 1 04:16:46.582923 ignition[723]: Stage: fetch Nov 1 04:16:46.583029 ignition[723]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 04:16:46.583047 ignition[723]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Nov 1 04:16:46.584239 systemd[1]: Finished dracut-initqueue.service. Nov 1 04:16:46.583872 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 1 04:16:46.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:46.585807 systemd[1]: Reached target remote-fs-pre.target. Nov 1 04:16:46.583953 ignition[723]: parsed url from cmdline: "" Nov 1 04:16:46.586485 systemd[1]: Reached target remote-cryptsetup.target. Nov 1 04:16:46.583956 ignition[723]: no config URL provided Nov 1 04:16:46.587819 systemd[1]: Reached target remote-fs.target. Nov 1 04:16:46.583962 ignition[723]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 04:16:46.590097 systemd[1]: Starting dracut-pre-mount.service... Nov 1 04:16:46.583969 ignition[723]: no config at "/usr/lib/ignition/user.ign" Nov 1 04:16:46.589018 ignition[723]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Nov 1 04:16:46.589037 ignition[723]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Nov 1 04:16:46.590621 ignition[723]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Nov 1 04:16:46.604299 systemd[1]: Finished dracut-pre-mount.service. Nov 1 04:16:46.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 04:16:46.613119 ignition[723]: GET result: OK Nov 1 04:16:46.613744 ignition[723]: parsing config with SHA512: 282fe207576c2fa20cccf7c584baab7386c4171701ab324c60cdee31264dcc77d41de41fc18be1afb3aa9e3d1e4d8b8c5065bd67a6be2c8af43d4b8d8fc986f0 Nov 1 04:16:46.625772 unknown[723]: fetched base config from "system" Nov 1 04:16:46.626545 unknown[723]: fetched base config from "system" Nov 1 04:16:46.627156 unknown[723]: fetched user config from "openstack" Nov 1 04:16:46.628453 ignition[723]: fetch: fetch complete Nov 1 04:16:46.629041 ignition[723]: fetch: fetch passed Nov 1 04:16:46.629649 ignition[723]: Ignition finished successfully Nov 1 04:16:46.631919 systemd[1]: Finished ignition-fetch.service. Nov 1 04:16:46.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:46.633714 systemd[1]: Starting ignition-kargs.service... Nov 1 04:16:46.644836 ignition[739]: Ignition 2.14.0 Nov 1 04:16:46.645477 ignition[739]: Stage: kargs Nov 1 04:16:46.645997 ignition[739]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 04:16:46.646547 ignition[739]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Nov 1 04:16:46.647664 ignition[739]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 1 04:16:46.649426 ignition[739]: kargs: kargs passed Nov 1 04:16:46.649876 ignition[739]: Ignition finished successfully Nov 1 04:16:46.651106 systemd[1]: Finished ignition-kargs.service. Nov 1 04:16:46.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:46.652420 systemd[1]: Starting ignition-disks.service... Nov 1 04:16:46.661428 ignition[744]: Ignition 2.14.0 Nov 1 04:16:46.661438 ignition[744]: Stage: disks Nov 1 04:16:46.661571 ignition[744]: reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 04:16:46.661589 ignition[744]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Nov 1 04:16:46.662600 ignition[744]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 1 04:16:46.664194 ignition[744]: disks: disks passed Nov 1 04:16:46.664245 ignition[744]: Ignition finished successfully Nov 1 04:16:46.665272 systemd[1]: Finished ignition-disks.service. Nov 1 04:16:46.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:46.666212 systemd[1]: Reached target initrd-root-device.target. Nov 1 04:16:46.666967 systemd[1]: Reached target local-fs-pre.target. Nov 1 04:16:46.667757 systemd[1]: Reached target local-fs.target. Nov 1 04:16:46.668487 systemd[1]: Reached target sysinit.target. Nov 1 04:16:46.669211 systemd[1]: Reached target basic.target. Nov 1 04:16:46.670921 systemd[1]: Starting systemd-fsck-root.service... 
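The fetch stage above pulls the Ignition user data from the OpenStack metadata service at http://169.254.169.254/openstack/latest/user_data, retrying until the freshly configured network answers ("attempt #1", then "GET result: OK"). A minimal sketch of that fetch-with-retry pattern; the retry count and delay are arbitrary choices, not Ignition's own policy:

```python
# Hedged sketch: fetch OpenStack user data the way the fetch stage above does,
# retrying because the metadata service may not be reachable immediately.
import time
import urllib.error
import urllib.request

URL = "http://169.254.169.254/openstack/latest/user_data"

def fetch_user_data(attempts: int = 5, delay: float = 2.0) -> bytes:
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(URL, timeout=10) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError):
            if attempt == attempts:
                raise
            time.sleep(delay)  # wait before the next attempt
    raise RuntimeError("unreachable")

if __name__ == "__main__":
    print(len(fetch_user_data()), "bytes of user data")
```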
Nov 1 04:16:46.687108 systemd-fsck[751]: ROOT: clean, 637/1628000 files, 124069/1617920 blocks Nov 1 04:16:46.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:46.691083 systemd[1]: Finished systemd-fsck-root.service. Nov 1 04:16:46.695469 systemd[1]: Mounting sysroot.mount... Nov 1 04:16:46.704580 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Nov 1 04:16:46.705540 systemd[1]: Mounted sysroot.mount. Nov 1 04:16:46.706337 systemd[1]: Reached target initrd-root-fs.target. Nov 1 04:16:46.708203 systemd[1]: Mounting sysroot-usr.mount... Nov 1 04:16:46.709242 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Nov 1 04:16:46.710153 systemd[1]: Starting flatcar-openstack-hostname.service... Nov 1 04:16:46.711721 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 04:16:46.711764 systemd[1]: Reached target ignition-diskful.target. Nov 1 04:16:46.725842 systemd[1]: Mounted sysroot-usr.mount. Nov 1 04:16:46.727712 systemd[1]: Starting initrd-setup-root.service... Nov 1 04:16:46.734455 initrd-setup-root[762]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 04:16:46.743521 initrd-setup-root[770]: cut: /sysroot/etc/group: No such file or directory Nov 1 04:16:46.754120 initrd-setup-root[779]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 04:16:46.763201 initrd-setup-root[788]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 04:16:46.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:46.803903 systemd[1]: Finished initrd-setup-root.service. Nov 1 04:16:46.805185 systemd[1]: Starting ignition-mount.service... Nov 1 04:16:46.806347 systemd[1]: Starting sysroot-boot.service... Nov 1 04:16:46.823553 bash[806]: umount: /sysroot/usr/share/oem: not mounted. Nov 1 04:16:46.829178 coreos-metadata[757]: Nov 01 04:16:46.829 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Nov 1 04:16:46.834506 ignition[807]: INFO : Ignition 2.14.0 Nov 1 04:16:46.835142 ignition[807]: INFO : Stage: mount Nov 1 04:16:46.835701 ignition[807]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 04:16:46.836587 ignition[807]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Nov 1 04:16:46.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:46.838907 systemd[1]: Finished sysroot-boot.service. Nov 1 04:16:46.839951 ignition[807]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 1 04:16:46.841767 ignition[807]: INFO : mount: mount passed Nov 1 04:16:46.842219 ignition[807]: INFO : Ignition finished successfully Nov 1 04:16:46.843360 systemd[1]: Finished ignition-mount.service. Nov 1 04:16:46.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Nov 1 04:16:46.898211 coreos-metadata[757]: Nov 01 04:16:46.898 INFO Fetch successful Nov 1 04:16:46.899671 coreos-metadata[757]: Nov 01 04:16:46.899 INFO wrote hostname srv-i9e8z.gb1.brightbox.com to /sysroot/etc/hostname Nov 1 04:16:46.904740 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Nov 1 04:16:46.904982 systemd[1]: Finished flatcar-openstack-hostname.service. Nov 1 04:16:46.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:46.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:47.274798 systemd[1]: Mounting sysroot-usr-share-oem.mount... Nov 1 04:16:47.287347 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (814) Nov 1 04:16:47.287446 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 04:16:47.287484 kernel: BTRFS info (device vda6): using free space tree Nov 1 04:16:47.288124 kernel: BTRFS info (device vda6): has skinny extents Nov 1 04:16:47.294174 systemd[1]: Mounted sysroot-usr-share-oem.mount. Nov 1 04:16:47.295963 systemd[1]: Starting ignition-files.service... Nov 1 04:16:47.316294 ignition[834]: INFO : Ignition 2.14.0 Nov 1 04:16:47.316294 ignition[834]: INFO : Stage: files Nov 1 04:16:47.317283 ignition[834]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 04:16:47.317283 ignition[834]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Nov 1 04:16:47.318456 ignition[834]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 1 04:16:47.319175 ignition[834]: DEBUG : files: compiled without relabeling support, skipping Nov 1 04:16:47.321684 ignition[834]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 04:16:47.321684 ignition[834]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 04:16:47.325738 ignition[834]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 04:16:47.325738 ignition[834]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 04:16:47.329489 ignition[834]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 1 04:16:47.327639 unknown[834]: wrote ssh authorized keys file for user: core Nov 1 04:16:47.333311 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 1 04:16:47.334691 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 1 04:16:47.334691 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 1 04:16:47.334691 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 1 04:16:47.508817 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 1 04:16:47.769711 ignition[834]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 1 04:16:47.772073 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 1 04:16:47.772073 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 1 04:16:47.772073 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 1 04:16:47.772073 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 1 04:16:47.772073 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 04:16:47.778607 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 04:16:47.778607 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 04:16:47.778607 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 04:16:47.778607 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 04:16:47.778607 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 04:16:47.778607 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 04:16:47.778607 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 04:16:47.778607 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 04:16:47.778607 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 1 04:16:48.087900 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 1 04:16:48.370826 systemd-networkd[714]: eth0: Gained IPv6LL Nov 1 04:16:49.514853 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 04:16:49.514853 ignition[834]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" Nov 1 04:16:49.514853 ignition[834]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" Nov 1 04:16:49.514853 ignition[834]: INFO : files: op(d): [started] processing unit "containerd.service" Nov 1 04:16:49.519933 ignition[834]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 1 04:16:49.519933 ignition[834]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 1 
04:16:49.519933 ignition[834]: INFO : files: op(d): [finished] processing unit "containerd.service" Nov 1 04:16:49.519933 ignition[834]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Nov 1 04:16:49.519933 ignition[834]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 04:16:49.519933 ignition[834]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 04:16:49.519933 ignition[834]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Nov 1 04:16:49.519933 ignition[834]: INFO : files: op(11): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Nov 1 04:16:49.519933 ignition[834]: INFO : files: op(11): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Nov 1 04:16:49.519933 ignition[834]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Nov 1 04:16:49.519933 ignition[834]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 04:16:49.534975 ignition[834]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 04:16:49.534975 ignition[834]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 04:16:49.534975 ignition[834]: INFO : files: files passed Nov 1 04:16:49.534975 ignition[834]: INFO : Ignition finished successfully Nov 1 04:16:49.548198 kernel: kauditd_printk_skb: 26 callbacks suppressed Nov 1 04:16:49.548226 kernel: audit: type=1130 audit(1761970609.537:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.535075 systemd[1]: Finished ignition-files.service. Nov 1 04:16:49.540292 systemd[1]: Starting initrd-setup-root-after-ignition.service... Nov 1 04:16:49.546589 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Nov 1 04:16:49.548583 systemd[1]: Starting ignition-quench.service... Nov 1 04:16:49.552731 initrd-setup-root-after-ignition[859]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 04:16:49.558764 kernel: audit: type=1130 audit(1761970609.552:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.558789 kernel: audit: type=1131 audit(1761970609.552:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Nov 1 04:16:49.552219 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 04:16:49.552342 systemd[1]: Finished ignition-quench.service. Nov 1 04:16:49.552962 systemd[1]: Finished initrd-setup-root-after-ignition.service. Nov 1 04:16:49.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.561143 systemd[1]: Reached target ignition-complete.target. Nov 1 04:16:49.573641 kernel: audit: type=1130 audit(1761970609.560:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.575660 systemd[1]: Starting initrd-parse-etc.service... Nov 1 04:16:49.596654 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 04:16:49.597744 systemd[1]: Finished initrd-parse-etc.service. Nov 1 04:16:49.611503 kernel: audit: type=1130 audit(1761970609.597:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.611532 kernel: audit: type=1131 audit(1761970609.597:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.598631 systemd[1]: Reached target initrd-fs.target. Nov 1 04:16:49.612404 systemd[1]: Reached target initrd.target. Nov 1 04:16:49.613711 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Nov 1 04:16:49.615619 systemd[1]: Starting dracut-pre-pivot.service... Nov 1 04:16:49.630049 systemd[1]: Finished dracut-pre-pivot.service. Nov 1 04:16:49.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.639866 systemd[1]: Starting initrd-cleanup.service... Nov 1 04:16:49.640607 kernel: audit: type=1130 audit(1761970609.631:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.658833 systemd[1]: Stopped target nss-lookup.target. Nov 1 04:16:49.660966 systemd[1]: Stopped target remote-cryptsetup.target. Nov 1 04:16:49.661894 systemd[1]: Stopped target timers.target. Nov 1 04:16:49.662727 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 04:16:49.663277 systemd[1]: Stopped dracut-pre-pivot.service. Nov 1 04:16:49.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.664321 systemd[1]: Stopped target initrd.target. 
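For context on the files stage a few entries up (install.sh, the nginx/nfs manifests, the containerd drop-in, prepare-helm.service and its preset): all of it is declared in the user config Ignition fetched earlier. A skeletal sketch of such a config, serialized from Python; the spec version, file contents, and unit body are assumptions for illustration, not the config this host actually received:

```python
# Hedged sketch: a minimal Ignition-style config with one file and one unit,
# loosely mirroring what the files stage above wrote. Spec version and all
# contents are placeholders, not recovered from this boot.
import json

config = {
    "ignition": {"version": "3.3.0"},  # assumed spec version
    "storage": {
        "files": [
            {
                "path": "/home/core/install.sh",
                "mode": 0o755,
                "contents": {"source": "data:,echo%20placeholder%0A"},  # URL-encoded inline data
            }
        ]
    },
    "systemd": {
        "units": [
            {
                "name": "prepare-helm.service",
                "enabled": True,
                "contents": "[Unit]\nDescription=Unpack helm (placeholder)\n"
                            "[Service]\nType=oneshot\nExecStart=/usr/bin/true\n"
                            "[Install]\nWantedBy=multi-user.target\n",
            }
        ]
    },
}

print(json.dumps(config, indent=2))
```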
Nov 1 04:16:49.667569 kernel: audit: type=1131 audit(1761970609.663:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.667733 systemd[1]: Stopped target basic.target. Nov 1 04:16:49.668551 systemd[1]: Stopped target ignition-complete.target. Nov 1 04:16:49.669409 systemd[1]: Stopped target ignition-diskful.target. Nov 1 04:16:49.670264 systemd[1]: Stopped target initrd-root-device.target. Nov 1 04:16:49.670735 systemd[1]: Stopped target remote-fs.target. Nov 1 04:16:49.671425 systemd[1]: Stopped target remote-fs-pre.target. Nov 1 04:16:49.672112 systemd[1]: Stopped target sysinit.target. Nov 1 04:16:49.672805 systemd[1]: Stopped target local-fs.target. Nov 1 04:16:49.673445 systemd[1]: Stopped target local-fs-pre.target. Nov 1 04:16:49.674104 systemd[1]: Stopped target swap.target. Nov 1 04:16:49.674741 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 04:16:49.678292 kernel: audit: type=1131 audit(1761970609.674:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.674858 systemd[1]: Stopped dracut-pre-mount.service. Nov 1 04:16:49.675500 systemd[1]: Stopped target cryptsetup.target. Nov 1 04:16:49.682169 kernel: audit: type=1131 audit(1761970609.678:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.678670 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 04:16:49.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.678770 systemd[1]: Stopped dracut-initqueue.service. Nov 1 04:16:49.679451 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 04:16:49.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.679568 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Nov 1 04:16:49.682653 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 04:16:49.682751 systemd[1]: Stopped ignition-files.service. Nov 1 04:16:49.684325 systemd[1]: Stopping ignition-mount.service... Nov 1 04:16:49.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 04:16:49.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.687658 systemd[1]: Stopping iscsiuio.service... Nov 1 04:16:49.688055 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 04:16:49.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.702357 ignition[872]: INFO : Ignition 2.14.0 Nov 1 04:16:49.702357 ignition[872]: INFO : Stage: umount Nov 1 04:16:49.702357 ignition[872]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Nov 1 04:16:49.702357 ignition[872]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Nov 1 04:16:49.702357 ignition[872]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 1 04:16:49.702357 ignition[872]: INFO : umount: umount passed Nov 1 04:16:49.702357 ignition[872]: INFO : Ignition finished successfully Nov 1 04:16:49.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.688179 systemd[1]: Stopped kmod-static-nodes.service. Nov 1 04:16:49.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.690497 systemd[1]: Stopping sysroot-boot.service... Nov 1 04:16:49.690963 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 04:16:49.691146 systemd[1]: Stopped systemd-udev-trigger.service. Nov 1 04:16:49.691749 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 04:16:49.691891 systemd[1]: Stopped dracut-pre-trigger.service. Nov 1 04:16:49.694211 systemd[1]: iscsiuio.service: Deactivated successfully. Nov 1 04:16:49.694358 systemd[1]: Stopped iscsiuio.service. Nov 1 04:16:49.702397 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 04:16:49.702491 systemd[1]: Stopped ignition-mount.service. Nov 1 04:16:49.703540 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 04:16:49.703657 systemd[1]: Stopped ignition-disks.service. Nov 1 04:16:49.705004 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 04:16:49.705041 systemd[1]: Stopped ignition-kargs.service. Nov 1 04:16:49.705969 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 1 04:16:49.706005 systemd[1]: Stopped ignition-fetch.service. 
Nov 1 04:16:49.706757 systemd[1]: Stopped target network.target. Nov 1 04:16:49.707398 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 04:16:49.707440 systemd[1]: Stopped ignition-fetch-offline.service. Nov 1 04:16:49.708134 systemd[1]: Stopped target paths.target. Nov 1 04:16:49.708742 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 04:16:49.714599 systemd[1]: Stopped systemd-ask-password-console.path. Nov 1 04:16:49.721080 systemd[1]: Stopped target slices.target. Nov 1 04:16:49.722577 systemd[1]: Stopped target sockets.target. Nov 1 04:16:49.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.723836 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 04:16:49.723873 systemd[1]: Closed iscsid.socket. Nov 1 04:16:49.725345 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 04:16:49.725386 systemd[1]: Closed iscsiuio.socket. Nov 1 04:16:49.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.726602 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 04:16:49.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.726646 systemd[1]: Stopped ignition-setup.service. Nov 1 04:16:49.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.728014 systemd[1]: Stopping systemd-networkd.service... Nov 1 04:16:49.731059 systemd[1]: Stopping systemd-resolved.service... Nov 1 04:16:49.732654 systemd-networkd[714]: eth0: DHCPv6 lease lost Nov 1 04:16:49.752000 audit: BPF prog-id=9 op=UNLOAD Nov 1 04:16:49.735795 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 04:16:49.737344 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 04:16:49.737632 systemd[1]: Stopped systemd-networkd.service. Nov 1 04:16:49.740995 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 04:16:49.741230 systemd[1]: Finished initrd-cleanup.service. Nov 1 04:16:49.742811 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 04:16:49.742852 systemd[1]: Closed systemd-networkd.socket. Nov 1 04:16:49.744242 systemd[1]: Stopping network-cleanup.service... Nov 1 04:16:49.747676 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 04:16:49.747743 systemd[1]: Stopped parse-ip-for-networkd.service. 
Nov 1 04:16:49.748269 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 04:16:49.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.748314 systemd[1]: Stopped systemd-sysctl.service. Nov 1 04:16:49.749066 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 04:16:49.749113 systemd[1]: Stopped systemd-modules-load.service. Nov 1 04:16:49.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.755350 systemd[1]: Stopping systemd-udevd.service... Nov 1 04:16:49.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.778000 audit: BPF prog-id=6 op=UNLOAD Nov 1 04:16:49.756878 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 1 04:16:49.757419 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 04:16:49.757508 systemd[1]: Stopped systemd-resolved.service. Nov 1 04:16:49.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.760385 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 04:16:49.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.760512 systemd[1]: Stopped sysroot-boot.service. Nov 1 04:16:49.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.763352 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 1 04:16:49.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.766763 systemd[1]: Stopped systemd-udevd.service. Nov 1 04:16:49.778137 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 04:16:49.778236 systemd[1]: Stopped network-cleanup.service. Nov 1 04:16:49.778937 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 04:16:49.778976 systemd[1]: Closed systemd-udevd-control.socket. Nov 1 04:16:49.779528 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 1 04:16:49.779601 systemd[1]: Closed systemd-udevd-kernel.socket. Nov 1 04:16:49.780629 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 04:16:49.780670 systemd[1]: Stopped dracut-pre-udev.service. Nov 1 04:16:49.781625 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 04:16:49.781662 systemd[1]: Stopped dracut-cmdline.service. 
Nov 1 04:16:49.782244 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 04:16:49.782278 systemd[1]: Stopped dracut-cmdline-ask.service. Nov 1 04:16:49.782950 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 04:16:49.782985 systemd[1]: Stopped initrd-setup-root.service. Nov 1 04:16:49.784528 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Nov 1 04:16:49.791985 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 04:16:49.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.792056 systemd[1]: Stopped systemd-vconsole-setup.service. Nov 1 04:16:49.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:49.793018 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 04:16:49.793104 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Nov 1 04:16:49.793868 systemd[1]: Reached target initrd-switch-root.target. Nov 1 04:16:49.795734 systemd[1]: Starting initrd-switch-root.service... Nov 1 04:16:49.804000 audit: BPF prog-id=5 op=UNLOAD Nov 1 04:16:49.804000 audit: BPF prog-id=4 op=UNLOAD Nov 1 04:16:49.804000 audit: BPF prog-id=3 op=UNLOAD Nov 1 04:16:49.804082 systemd[1]: Switching root. Nov 1 04:16:49.809000 audit: BPF prog-id=8 op=UNLOAD Nov 1 04:16:49.809000 audit: BPF prog-id=7 op=UNLOAD Nov 1 04:16:49.827319 iscsid[719]: iscsid shutting down. Nov 1 04:16:49.828593 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). Nov 1 04:16:49.828736 systemd-journald[201]: Journal stopped Nov 1 04:16:52.994969 kernel: SELinux: Class mctp_socket not defined in policy. Nov 1 04:16:52.995128 kernel: SELinux: Class anon_inode not defined in policy. Nov 1 04:16:52.995146 kernel: SELinux: the above unknown classes and permissions will be allowed Nov 1 04:16:52.995165 kernel: SELinux: policy capability network_peer_controls=1 Nov 1 04:16:52.995178 kernel: SELinux: policy capability open_perms=1 Nov 1 04:16:52.995190 kernel: SELinux: policy capability extended_socket_class=1 Nov 1 04:16:52.995205 kernel: SELinux: policy capability always_check_network=0 Nov 1 04:16:52.995220 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 1 04:16:52.995237 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 1 04:16:52.995250 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 1 04:16:52.995263 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 1 04:16:52.995277 systemd[1]: Successfully loaded SELinux policy in 49.023ms. Nov 1 04:16:52.995306 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.448ms. 
Nov 1 04:16:52.995320 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Nov 1 04:16:52.995336 systemd[1]: Detected virtualization kvm. Nov 1 04:16:52.995349 systemd[1]: Detected architecture x86-64. Nov 1 04:16:52.995364 systemd[1]: Detected first boot. Nov 1 04:16:52.995379 systemd[1]: Hostname set to . Nov 1 04:16:52.995393 systemd[1]: Initializing machine ID from VM UUID. Nov 1 04:16:52.995409 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Nov 1 04:16:52.995425 systemd[1]: Populated /etc with preset unit settings. Nov 1 04:16:52.995440 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 04:16:52.995455 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 04:16:52.995471 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 04:16:52.995489 systemd[1]: Queued start job for default target multi-user.target. Nov 1 04:16:52.995510 systemd[1]: Unnecessary job was removed for dev-vda6.device. Nov 1 04:16:52.995524 systemd[1]: Created slice system-addon\x2dconfig.slice. Nov 1 04:16:52.995538 systemd[1]: Created slice system-addon\x2drun.slice. Nov 1 04:16:52.995552 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Nov 1 04:16:52.995646 systemd[1]: Created slice system-getty.slice. Nov 1 04:16:52.995660 systemd[1]: Created slice system-modprobe.slice. Nov 1 04:16:52.995674 systemd[1]: Created slice system-serial\x2dgetty.slice. Nov 1 04:16:52.995693 systemd[1]: Created slice system-system\x2dcloudinit.slice. Nov 1 04:16:52.995707 systemd[1]: Created slice system-systemd\x2dfsck.slice. Nov 1 04:16:52.995724 systemd[1]: Created slice user.slice. Nov 1 04:16:52.995741 systemd[1]: Started systemd-ask-password-console.path. Nov 1 04:16:52.995756 systemd[1]: Started systemd-ask-password-wall.path. Nov 1 04:16:52.995770 systemd[1]: Set up automount boot.automount. Nov 1 04:16:52.995783 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Nov 1 04:16:52.995797 systemd[1]: Reached target integritysetup.target. Nov 1 04:16:52.995812 systemd[1]: Reached target remote-cryptsetup.target. Nov 1 04:16:52.995829 systemd[1]: Reached target remote-fs.target. Nov 1 04:16:52.995842 systemd[1]: Reached target slices.target. Nov 1 04:16:52.995856 systemd[1]: Reached target swap.target. Nov 1 04:16:52.995870 systemd[1]: Reached target torcx.target. Nov 1 04:16:52.995884 systemd[1]: Reached target veritysetup.target. Nov 1 04:16:52.995898 systemd[1]: Listening on systemd-coredump.socket. Nov 1 04:16:52.995921 systemd[1]: Listening on systemd-initctl.socket. Nov 1 04:16:52.995939 systemd[1]: Listening on systemd-journald-audit.socket. Nov 1 04:16:52.995954 systemd[1]: Listening on systemd-journald-dev-log.socket. Nov 1 04:16:52.995970 systemd[1]: Listening on systemd-journald.socket. Nov 1 04:16:52.995984 systemd[1]: Listening on systemd-networkd.socket. 
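Among the messages above, systemd flags two deprecated directives in the shipped locksmithd.service (CPUShares= and MemoryLimit=) and names their replacements, CPUWeight= and MemoryMax=. A sketch of how the replacement directives would be written in a drop-in; the numeric values are placeholders, and the warning itself only goes away once the vendor unit's old lines are removed or overridden:

```python
# Hedged sketch: write a systemd drop-in using the replacement directives the
# log asks for (CPUWeight= instead of CPUShares=, MemoryMax= instead of
# MemoryLimit=). The values are placeholders, not taken from the log.
from pathlib import Path

dropin_dir = Path("/etc/systemd/system/locksmithd.service.d")
dropin_dir.mkdir(parents=True, exist_ok=True)
(dropin_dir / "10-resource-limits.conf").write_text(
    "[Service]\n"
    "CPUWeight=100\n"   # rough analogue of the old CPUShares= setting
    "MemoryMax=128M\n"  # rough analogue of the old MemoryLimit= setting
)
```

After writing a drop-in like this you would run `systemctl daemon-reload` for systemd to pick it up.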
Nov 1 04:16:52.995998 systemd[1]: Listening on systemd-udevd-control.socket. Nov 1 04:16:52.996011 systemd[1]: Listening on systemd-udevd-kernel.socket. Nov 1 04:16:52.996027 systemd[1]: Listening on systemd-userdbd.socket. Nov 1 04:16:52.996041 systemd[1]: Mounting dev-hugepages.mount... Nov 1 04:16:52.996058 systemd[1]: Mounting dev-mqueue.mount... Nov 1 04:16:52.996072 systemd[1]: Mounting media.mount... Nov 1 04:16:52.996086 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 04:16:52.996100 systemd[1]: Mounting sys-kernel-debug.mount... Nov 1 04:16:52.996116 systemd[1]: Mounting sys-kernel-tracing.mount... Nov 1 04:16:52.996130 systemd[1]: Mounting tmp.mount... Nov 1 04:16:52.996144 systemd[1]: Starting flatcar-tmpfiles.service... Nov 1 04:16:52.996158 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 04:16:52.996171 systemd[1]: Starting kmod-static-nodes.service... Nov 1 04:16:52.996185 systemd[1]: Starting modprobe@configfs.service... Nov 1 04:16:52.996199 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 04:16:52.996213 systemd[1]: Starting modprobe@drm.service... Nov 1 04:16:52.996226 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 04:16:52.996243 systemd[1]: Starting modprobe@fuse.service... Nov 1 04:16:52.996257 systemd[1]: Starting modprobe@loop.service... Nov 1 04:16:52.996271 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 04:16:52.996285 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Nov 1 04:16:52.996298 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Nov 1 04:16:52.996311 systemd[1]: Starting systemd-journald.service... Nov 1 04:16:52.996327 systemd[1]: Starting systemd-modules-load.service... Nov 1 04:16:52.996341 systemd[1]: Starting systemd-network-generator.service... Nov 1 04:16:52.996357 systemd[1]: Starting systemd-remount-fs.service... Nov 1 04:16:52.996370 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 04:16:52.996384 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 04:16:52.996398 systemd[1]: Mounted dev-hugepages.mount. Nov 1 04:16:52.996411 systemd[1]: Mounted dev-mqueue.mount. Nov 1 04:16:52.996425 systemd[1]: Mounted media.mount. Nov 1 04:16:52.996439 systemd[1]: Mounted sys-kernel-debug.mount. Nov 1 04:16:52.996452 systemd[1]: Mounted sys-kernel-tracing.mount. Nov 1 04:16:52.996465 systemd[1]: Mounted tmp.mount. Nov 1 04:16:52.996479 systemd[1]: Finished kmod-static-nodes.service. Nov 1 04:16:52.996495 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 1 04:16:52.996510 kernel: loop: module loaded Nov 1 04:16:52.996523 systemd[1]: Finished modprobe@configfs.service. Nov 1 04:16:52.996537 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 04:16:52.996550 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 04:16:52.996694 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 04:16:52.996709 systemd[1]: Finished modprobe@drm.service. Nov 1 04:16:52.996722 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 04:16:52.996736 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 04:16:52.996757 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Nov 1 04:16:52.996771 kernel: fuse: init (API version 7.34) Nov 1 04:16:52.996785 systemd[1]: Finished modprobe@loop.service. Nov 1 04:16:52.996798 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 1 04:16:52.996815 systemd[1]: Finished modprobe@fuse.service. Nov 1 04:16:52.996829 systemd[1]: Finished systemd-modules-load.service. Nov 1 04:16:52.996844 systemd[1]: Finished systemd-network-generator.service. Nov 1 04:16:52.996857 systemd[1]: Finished systemd-remount-fs.service. Nov 1 04:16:52.996870 systemd[1]: Reached target network-pre.target. Nov 1 04:16:52.996888 systemd-journald[1017]: Journal started Nov 1 04:16:52.996948 systemd-journald[1017]: Runtime Journal (/run/log/journal/1445c84aa852424abb308b08b3c3cbea) is 4.7M, max 38.1M, 33.3M free. Nov 1 04:16:52.809000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Nov 1 04:16:52.809000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Nov 1 04:16:52.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:52.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:52.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:52.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:52.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:52.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:52.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:52.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:52.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:52.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Nov 1 04:16:52.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:52.983000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Nov 1 04:16:52.983000 audit[1017]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffc538e4c40 a2=4000 a3=7ffc538e4cdc items=0 ppid=1 pid=1017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:16:52.983000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Nov 1 04:16:52.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:52.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:52.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:52.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:52.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:53.005569 systemd[1]: Mounting sys-fs-fuse-connections.mount... Nov 1 04:16:53.005620 systemd[1]: Mounting sys-kernel-config.mount... Nov 1 04:16:53.005640 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 04:16:53.012574 systemd[1]: Starting systemd-hwdb-update.service... Nov 1 04:16:53.017247 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 04:16:53.020689 systemd[1]: Starting systemd-random-seed.service... Nov 1 04:16:53.020727 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 04:16:53.024784 systemd[1]: Starting systemd-sysctl.service... Nov 1 04:16:53.026747 systemd[1]: Started systemd-journald.service. Nov 1 04:16:53.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:53.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:53.029779 systemd[1]: Finished flatcar-tmpfiles.service. Nov 1 04:16:53.030334 systemd[1]: Mounted sys-fs-fuse-connections.mount. 
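
The audit records threaded through the journal above (the SERVICE_START/SERVICE_STOP pairs, the CONFIG_CHANGE emitted when journald enabled auditing, the earlier AVC) use standard kernel audit types, so they can be filtered back out by type once the audit userspace tools are present. A small sketch, assuming ausearch is available on the image:

ausearch -m SERVICE_START,SERVICE_STOP -ts boot   # unit start/stop events since boot
ausearch -m AVC -ts boot -i                       # SELinux denials, decoded (permissive=1 here)
ausearch -p 1017 -i                               # records tied to the systemd-journald PID seen above
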
Nov 1 04:16:53.035682 systemd[1]: Mounted sys-kernel-config.mount. Nov 1 04:16:53.039712 systemd[1]: Starting systemd-journal-flush.service... Nov 1 04:16:53.046065 systemd[1]: Starting systemd-sysusers.service... Nov 1 04:16:53.047920 systemd[1]: Finished systemd-random-seed.service. Nov 1 04:16:53.048427 systemd[1]: Reached target first-boot-complete.target. Nov 1 04:16:53.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:53.056672 systemd[1]: Finished systemd-sysctl.service. Nov 1 04:16:53.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:53.058125 systemd-journald[1017]: Time spent on flushing to /var/log/journal/1445c84aa852424abb308b08b3c3cbea is 51.463ms for 1249 entries. Nov 1 04:16:53.058125 systemd-journald[1017]: System Journal (/var/log/journal/1445c84aa852424abb308b08b3c3cbea) is 8.0M, max 584.8M, 576.8M free. Nov 1 04:16:53.123710 systemd-journald[1017]: Received client request to flush runtime journal. Nov 1 04:16:53.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:53.082951 systemd[1]: Finished systemd-sysusers.service. Nov 1 04:16:53.084959 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Nov 1 04:16:53.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:53.129813 systemd[1]: Finished systemd-journal-flush.service. Nov 1 04:16:53.134269 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Nov 1 04:16:53.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:53.136465 systemd[1]: Finished systemd-udev-trigger.service. Nov 1 04:16:53.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:53.138211 systemd[1]: Starting systemd-udev-settle.service... Nov 1 04:16:53.149267 udevadm[1070]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 1 04:16:53.653543 systemd[1]: Finished systemd-hwdb-update.service. Nov 1 04:16:53.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:53.657491 systemd[1]: Starting systemd-udevd.service... Nov 1 04:16:53.683796 systemd-udevd[1073]: Using default interface naming scheme 'v252'. Nov 1 04:16:53.707578 systemd[1]: Started systemd-udevd.service. 
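
The journal sizes reported above (4.7M runtime journal, 8.0M persistent journal against a 584.8M cap) are limits systemd-journald derives from the size of the backing filesystems. They can be pinned explicitly if flush time or disk usage needs to be bounded; a sketch with illustrative values, not ones taken from this system:

mkdir -p /etc/systemd/journald.conf.d
cat >/etc/systemd/journald.conf.d/10-size.conf <<'EOF'
[Journal]
# Cap for the volatile journal under /run/log/journal
RuntimeMaxUse=64M
# Cap for the persistent journal under /var/log/journal
SystemMaxUse=512M
EOF
systemctl restart systemd-journald
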
Nov 1 04:16:53.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:53.714242 systemd[1]: Starting systemd-networkd.service... Nov 1 04:16:53.723789 systemd[1]: Starting systemd-userdbd.service... Nov 1 04:16:53.763873 systemd[1]: Found device dev-ttyS0.device. Nov 1 04:16:53.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:53.775824 systemd[1]: Started systemd-userdbd.service. Nov 1 04:16:53.864635 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Nov 1 04:16:53.870785 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 1 04:16:53.884388 systemd-networkd[1086]: lo: Link UP Nov 1 04:16:53.884753 systemd-networkd[1086]: lo: Gained carrier Nov 1 04:16:53.885390 systemd-networkd[1086]: Enumeration completed Nov 1 04:16:53.886805 kernel: ACPI: button: Power Button [PWRF] Nov 1 04:16:53.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:53.885618 systemd[1]: Started systemd-networkd.service. Nov 1 04:16:53.887124 systemd-networkd[1086]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 04:16:53.888650 systemd-networkd[1086]: eth0: Link UP Nov 1 04:16:53.888751 systemd-networkd[1086]: eth0: Gained carrier Nov 1 04:16:53.892606 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 04:16:53.897741 systemd-networkd[1086]: eth0: DHCPv4 address 10.244.102.154/30, gateway 10.244.102.153 acquired from 10.244.102.153 Nov 1 04:16:53.920000 audit[1084]: AVC avc: denied { confidentiality } for pid=1084 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Nov 1 04:16:53.920000 audit[1084]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55d97e9907a0 a1=338ec a2=7f3cbefe9bc5 a3=5 items=110 ppid=1073 pid=1084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:16:53.920000 audit: CWD cwd="/" Nov 1 04:16:53.920000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=1 name=(null) inode=15562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=2 name=(null) inode=15562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=3 name=(null) inode=15563 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=4 name=(null) inode=15562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=5 name=(null) inode=15564 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=6 name=(null) inode=15562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=7 name=(null) inode=15565 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=8 name=(null) inode=15565 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=9 name=(null) inode=15566 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=10 name=(null) inode=15565 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=11 name=(null) inode=15567 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=12 name=(null) inode=15565 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=13 name=(null) inode=15568 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=14 name=(null) inode=15565 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=15 name=(null) inode=15569 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=16 name=(null) inode=15565 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=17 name=(null) inode=15570 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=18 name=(null) inode=15562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=19 name=(null) inode=15571 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=20 name=(null) inode=15571 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=21 name=(null) inode=15572 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=22 name=(null) inode=15571 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=23 name=(null) inode=15573 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=24 name=(null) inode=15571 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=25 name=(null) inode=15574 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=26 name=(null) inode=15571 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=27 name=(null) inode=15575 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=28 name=(null) inode=15571 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=29 name=(null) inode=15576 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=30 name=(null) inode=15562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=31 name=(null) inode=15577 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=32 name=(null) inode=15577 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=33 name=(null) inode=15578 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=34 name=(null) inode=15577 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=35 name=(null) inode=15579 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=36 name=(null) inode=15577 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=37 name=(null) inode=15580 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=38 name=(null) inode=15577 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=39 name=(null) inode=15581 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=40 name=(null) inode=15577 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=41 name=(null) inode=15582 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=42 name=(null) inode=15562 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=43 name=(null) inode=15583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=44 name=(null) inode=15583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=45 name=(null) inode=15584 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=46 name=(null) inode=15583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=47 name=(null) inode=15585 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=48 name=(null) inode=15583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=49 name=(null) inode=15586 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=50 name=(null) inode=15583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=51 name=(null) inode=15587 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=52 name=(null) inode=15583 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=53 name=(null) inode=15588 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=55 name=(null) inode=15589 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=56 name=(null) inode=15589 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=57 name=(null) inode=15590 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=58 name=(null) inode=15589 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=59 name=(null) inode=15591 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=60 name=(null) inode=15589 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=61 name=(null) inode=15592 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=62 name=(null) inode=15592 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=63 name=(null) inode=15593 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=64 name=(null) inode=15592 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=65 name=(null) inode=15594 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=66 name=(null) inode=15592 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=67 name=(null) inode=15595 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=68 name=(null) inode=15592 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=69 name=(null) inode=15596 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=70 
name=(null) inode=15592 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=71 name=(null) inode=15597 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=72 name=(null) inode=15589 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=73 name=(null) inode=15598 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=74 name=(null) inode=15598 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=75 name=(null) inode=15599 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=76 name=(null) inode=15598 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=77 name=(null) inode=15600 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=78 name=(null) inode=15598 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=79 name=(null) inode=15601 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=80 name=(null) inode=15598 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=81 name=(null) inode=15602 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=82 name=(null) inode=15598 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=83 name=(null) inode=15603 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=84 name=(null) inode=15589 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=85 name=(null) inode=15604 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=86 name=(null) inode=15604 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=87 name=(null) inode=15605 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=88 name=(null) inode=15604 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=89 name=(null) inode=15606 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=90 name=(null) inode=15604 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=91 name=(null) inode=15607 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=92 name=(null) inode=15604 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=93 name=(null) inode=15608 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=94 name=(null) inode=15604 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=95 name=(null) inode=15609 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=96 name=(null) inode=15589 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=97 name=(null) inode=15610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=98 name=(null) inode=15610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=99 name=(null) inode=15611 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=100 name=(null) inode=15610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=101 name=(null) inode=15612 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=102 name=(null) inode=15610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=103 name=(null) inode=15613 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=104 name=(null) inode=15610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=105 name=(null) inode=15614 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=106 name=(null) inode=15610 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=107 name=(null) inode=15615 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PATH item=109 name=(null) inode=15616 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:16:53.920000 audit: PROCTITLE proctitle="(udev-worker)" Nov 1 04:16:53.962607 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Nov 1 04:16:53.977577 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 1 04:16:53.980886 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Nov 1 04:16:53.981043 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 1 04:16:54.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.126894 systemd[1]: Finished systemd-udev-settle.service. Nov 1 04:16:54.131181 systemd[1]: Starting lvm2-activation-early.service... Nov 1 04:16:54.155055 lvm[1103]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 04:16:54.191344 systemd[1]: Finished lvm2-activation-early.service. Nov 1 04:16:54.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.192724 systemd[1]: Reached target cryptsetup.target. Nov 1 04:16:54.196809 systemd[1]: Starting lvm2-activation.service... Nov 1 04:16:54.202668 lvm[1105]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 04:16:54.223089 systemd[1]: Finished lvm2-activation.service. Nov 1 04:16:54.224399 systemd[1]: Reached target local-fs-pre.target. Nov 1 04:16:54.225428 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 04:16:54.225497 systemd[1]: Reached target local-fs.target. 
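
Before the long udev-worker audit record, systemd-networkd matched eth0 against the catch-all /usr/lib/systemd/network/zz-default.network and took a DHCPv4 lease (10.244.102.154/30 via 10.244.102.153). A per-interface file under /etc/systemd/network sorts ahead of that catch-all and therefore wins the match; a minimal sketch, assuming the NIC keeps the name eth0:

mkdir -p /etc/systemd/network
cat >/etc/systemd/network/10-eth0.network <<'EOF'
[Match]
Name=eth0

[Network]
DHCP=yes
# Static alternative, reusing the leased addresses purely as an illustration:
# DHCP=no
# Address=10.244.102.154/30
# Gateway=10.244.102.153
EOF
networkctl reload
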
Nov 1 04:16:54.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.226537 systemd[1]: Reached target machines.target. Nov 1 04:16:54.230976 systemd[1]: Starting ldconfig.service... Nov 1 04:16:54.232428 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 04:16:54.232523 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 04:16:54.234254 systemd[1]: Starting systemd-boot-update.service... Nov 1 04:16:54.235933 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Nov 1 04:16:54.238014 systemd[1]: Starting systemd-machine-id-commit.service... Nov 1 04:16:54.240133 systemd[1]: Starting systemd-sysext.service... Nov 1 04:16:54.253759 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1108 (bootctl) Nov 1 04:16:54.255113 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Nov 1 04:16:54.263035 systemd[1]: Unmounting usr-share-oem.mount... Nov 1 04:16:54.268202 systemd[1]: usr-share-oem.mount: Deactivated successfully. Nov 1 04:16:54.268452 systemd[1]: Unmounted usr-share-oem.mount. Nov 1 04:16:54.286635 kernel: loop0: detected capacity change from 0 to 224512 Nov 1 04:16:54.292905 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 04:16:54.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.294263 systemd[1]: Finished systemd-machine-id-commit.service. Nov 1 04:16:54.301239 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Nov 1 04:16:54.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.314760 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 04:16:54.339637 kernel: loop1: detected capacity change from 0 to 224512 Nov 1 04:16:54.356744 (sd-sysext)[1125]: Using extensions 'kubernetes'. Nov 1 04:16:54.359630 (sd-sysext)[1125]: Merged extensions into '/usr'. Nov 1 04:16:54.371436 systemd-fsck[1121]: fsck.fat 4.2 (2021-01-31) Nov 1 04:16:54.371436 systemd-fsck[1121]: /dev/vda1: 790 files, 120773/258078 clusters Nov 1 04:16:54.389483 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Nov 1 04:16:54.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.402203 systemd[1]: Mounting boot.mount... Nov 1 04:16:54.403418 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 04:16:54.408362 systemd[1]: Mounting usr-share-oem.mount... Nov 1 04:16:54.409800 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 04:16:54.411323 systemd[1]: Starting modprobe@dm_mod.service... 
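
The (sd-sysext) lines show a system extension image named 'kubernetes' being overlaid onto /usr; the loop0/loop1 capacity changes and the squashfs driver initialisation above belong to that merge. Merged extensions can be inspected and managed at runtime with systemd-sysext, for example:

systemd-sysext list      # extension images found under /etc/extensions, /run/extensions, /var/lib/extensions
systemd-sysext status    # which hierarchies (/usr, /opt) currently have extensions merged
systemd-sysext refresh   # unmerge and re-merge after adding or removing an image
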
Nov 1 04:16:54.415202 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 04:16:54.418691 systemd[1]: Starting modprobe@loop.service... Nov 1 04:16:54.419426 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 04:16:54.419721 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 04:16:54.420045 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 04:16:54.428898 systemd[1]: Mounted usr-share-oem.mount. Nov 1 04:16:54.430980 systemd[1]: Finished systemd-sysext.service. Nov 1 04:16:54.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.435638 systemd[1]: Mounted boot.mount. Nov 1 04:16:54.439361 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 04:16:54.439532 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 04:16:54.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.441965 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 04:16:54.442119 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 04:16:54.444037 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 04:16:54.444215 systemd[1]: Finished modprobe@loop.service. Nov 1 04:16:54.451726 systemd[1]: Starting ensure-sysext.service... Nov 1 04:16:54.452443 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 04:16:54.452724 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 04:16:54.456089 systemd[1]: Starting systemd-tmpfiles-setup.service... Nov 1 04:16:54.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Nov 1 04:16:54.458037 systemd[1]: Finished systemd-boot-update.service. Nov 1 04:16:54.467828 systemd[1]: Reloading. Nov 1 04:16:54.474927 systemd-tmpfiles[1144]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Nov 1 04:16:54.479342 systemd-tmpfiles[1144]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 04:16:54.484991 systemd-tmpfiles[1144]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 1 04:16:54.560104 /usr/lib/systemd/system-generators/torcx-generator[1164]: time="2025-11-01T04:16:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 04:16:54.560141 /usr/lib/systemd/system-generators/torcx-generator[1164]: time="2025-11-01T04:16:54Z" level=info msg="torcx already run" Nov 1 04:16:54.578672 ldconfig[1107]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 04:16:54.671408 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 04:16:54.671443 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 04:16:54.691192 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 04:16:54.758332 systemd[1]: Finished ldconfig.service. Nov 1 04:16:54.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.759645 kernel: kauditd_printk_skb: 208 callbacks suppressed Nov 1 04:16:54.759767 kernel: audit: type=1130 audit(1761970614.758:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.767595 kernel: audit: type=1130 audit(1761970614.763:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.763902 systemd[1]: Finished systemd-tmpfiles-setup.service. Nov 1 04:16:54.769961 systemd[1]: Starting audit-rules.service... Nov 1 04:16:54.771706 systemd[1]: Starting clean-ca-certificates.service... Nov 1 04:16:54.773716 systemd[1]: Starting systemd-journal-catalog-update.service... Nov 1 04:16:54.775939 systemd[1]: Starting systemd-resolved.service... Nov 1 04:16:54.782085 systemd[1]: Starting systemd-timesyncd.service... Nov 1 04:16:54.784930 systemd[1]: Starting systemd-update-utmp.service... Nov 1 04:16:54.788954 systemd[1]: Finished clean-ca-certificates.service. 
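
The "Duplicate line for path …, ignoring" notices from systemd-tmpfiles are harmless: two tmpfiles.d fragments declare the same path and the later one is skipped. Because a fragment in /etc/tmpfiles.d with the same file name completely masks its /usr/lib/tmpfiles.d counterpart, the duplicated entry can be adjusted there if the noise matters; the effective, merged configuration can be reviewed with:

systemd-tmpfiles --cat-config | less                             # configuration after all overrides
cp /usr/lib/tmpfiles.d/legacy.conf /etc/tmpfiles.d/legacy.conf   # then edit the copy to drop the duplicate
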
Nov 1 04:16:54.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.803594 kernel: audit: type=1130 audit(1761970614.791:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.800236 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 04:16:54.803000 audit[1225]: SYSTEM_BOOT pid=1225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.805536 systemd[1]: Finished systemd-update-utmp.service. Nov 1 04:16:54.808953 kernel: audit: type=1127 audit(1761970614.803:136): pid=1225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.813569 kernel: audit: type=1130 audit(1761970614.808:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.826300 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 04:16:54.827888 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 04:16:54.829686 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 04:16:54.832734 systemd[1]: Starting modprobe@loop.service... Nov 1 04:16:54.833991 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 04:16:54.834173 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 04:16:54.834313 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 04:16:54.835182 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 04:16:54.835341 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 04:16:54.849034 kernel: audit: type=1130 audit(1761970614.841:138): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.849111 kernel: audit: type=1131 audit(1761970614.841:139): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Nov 1 04:16:54.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.842163 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 04:16:54.842306 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 04:16:54.849789 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 04:16:54.849950 systemd[1]: Finished modprobe@loop.service. Nov 1 04:16:54.857052 kernel: audit: type=1130 audit(1761970614.848:140): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.850614 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 04:16:54.850715 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 04:16:54.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.858385 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 04:16:54.859908 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 04:16:54.866335 kernel: audit: type=1131 audit(1761970614.848:141): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.866397 kernel: audit: type=1130 audit(1761970614.849:142): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.863423 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 04:16:54.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.871642 systemd[1]: Starting modprobe@loop.service... Nov 1 04:16:54.872099 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 04:16:54.872261 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 04:16:54.872423 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 04:16:54.873481 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Nov 1 04:16:54.873666 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 04:16:54.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.875725 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 04:16:54.875883 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 04:16:54.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.876545 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 04:16:54.879210 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 04:16:54.880525 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 04:16:54.884137 systemd[1]: Starting modprobe@drm.service... Nov 1 04:16:54.889579 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 04:16:54.891719 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 04:16:54.891900 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 04:16:54.900811 systemd[1]: Starting systemd-networkd-wait-online.service... Nov 1 04:16:54.901354 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 04:16:54.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.904138 systemd[1]: Finished systemd-journal-catalog-update.service. 
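
systemd-networkd-wait-online, started above, holds back network-online.target until networkd reports its watched links as configured. On hosts where only one interface matters it is often narrowed to that link; a drop-in sketch, assuming eth0 is the interface of interest and a 30-second ceiling is acceptable:

mkdir -p /etc/systemd/system/systemd-networkd-wait-online.service.d
cat >/etc/systemd/system/systemd-networkd-wait-online.service.d/10-eth0.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/lib/systemd/systemd-networkd-wait-online --interface=eth0 --timeout=30
EOF
systemctl daemon-reload
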
Nov 1 04:16:54.905005 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 04:16:54.905162 systemd[1]: Finished modprobe@loop.service. Nov 1 04:16:54.905919 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 04:16:54.906069 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 04:16:54.907233 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 04:16:54.909459 systemd[1]: Starting systemd-update-done.service... Nov 1 04:16:54.910403 systemd[1]: Finished ensure-sysext.service. Nov 1 04:16:54.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.926426 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 04:16:54.926607 systemd[1]: Finished modprobe@drm.service. Nov 1 04:16:54.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.928171 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 04:16:54.928339 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 04:16:54.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.928865 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 04:16:54.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:16:54.934281 systemd[1]: Finished systemd-update-done.service. Nov 1 04:16:54.954859 augenrules[1266]: No rules Nov 1 04:16:54.954000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Nov 1 04:16:54.954000 audit[1266]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc6d81d0f0 a2=420 a3=0 items=0 ppid=1219 pid=1266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:16:54.954000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Nov 1 04:16:54.955522 systemd[1]: Finished audit-rules.service. Nov 1 04:16:54.967700 systemd[1]: Started systemd-timesyncd.service. Nov 1 04:16:54.968344 systemd[1]: Reached target time-set.target. Nov 1 04:16:54.983139 systemd-resolved[1222]: Positive Trust Anchors: Nov 1 04:16:54.983158 systemd-resolved[1222]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 04:16:54.983197 systemd-resolved[1222]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 04:16:54.990005 systemd-resolved[1222]: Using system hostname 'srv-i9e8z.gb1.brightbox.com'. Nov 1 04:16:54.992074 systemd[1]: Started systemd-resolved.service. Nov 1 04:16:54.992588 systemd[1]: Reached target network.target. Nov 1 04:16:54.992930 systemd[1]: Reached target nss-lookup.target. Nov 1 04:16:54.993312 systemd[1]: Reached target sysinit.target. Nov 1 04:16:54.993740 systemd[1]: Started motdgen.path. Nov 1 04:16:54.994091 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Nov 1 04:16:54.994647 systemd[1]: Started logrotate.timer. Nov 1 04:16:54.995073 systemd[1]: Started mdadm.timer. Nov 1 04:16:54.995395 systemd[1]: Started systemd-tmpfiles-clean.timer. Nov 1 04:16:54.995754 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 04:16:54.995802 systemd[1]: Reached target paths.target. Nov 1 04:16:54.996123 systemd[1]: Reached target timers.target. Nov 1 04:16:54.996791 systemd[1]: Listening on dbus.socket. Nov 1 04:16:54.998589 systemd[1]: Starting docker.socket... Nov 1 04:16:55.000756 systemd[1]: Listening on sshd.socket. Nov 1 04:16:55.001200 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 04:16:55.001589 systemd[1]: Listening on docker.socket. Nov 1 04:16:55.001974 systemd[1]: Reached target sockets.target. Nov 1 04:16:55.002379 systemd[1]: Reached target basic.target. Nov 1 04:16:55.002875 systemd[1]: System is tainted: cgroupsv1 Nov 1 04:16:55.002961 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 04:16:55.002985 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 04:16:55.004333 systemd[1]: Starting containerd.service... Nov 1 04:16:55.007633 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Nov 1 04:16:55.013317 systemd[1]: Starting dbus.service... Nov 1 04:16:55.015066 systemd[1]: Starting enable-oem-cloudinit.service... Nov 1 04:16:55.016934 systemd[1]: Starting extend-filesystems.service... Nov 1 04:16:55.017438 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Nov 1 04:16:55.019008 systemd[1]: Starting motdgen.service... Nov 1 04:16:55.030459 systemd[1]: Starting prepare-helm.service... Nov 1 04:16:55.033542 jq[1281]: false Nov 1 04:16:55.036116 systemd[1]: Starting ssh-key-proc-cmdline.service... Nov 1 04:16:55.051977 systemd[1]: Starting sshd-keygen.service... Nov 1 04:16:55.055187 systemd[1]: Starting systemd-logind.service... 
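The positive trust anchor loaded above is the DNS root zone's KSK-2017 DS record (key tag 20326, algorithm 8 / RSASHA256, digest type 2 / SHA-256), and the negative trust anchors cover the private and special-use domains for which systemd-resolved skips DNSSEC validation. To see what resolved ended up with at runtime, a quick sketch (assuming resolvectl is on the path):

  # Global and per-link resolver state, including the DNSSEC setting
  resolvectl status
  # Resolve a name through resolved; the output notes whether the answer was authenticated
  resolvectl query example.com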
Nov 1 04:16:55.058857 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 04:16:55.058985 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 1 04:16:55.063960 systemd[1]: Starting update-engine.service... Nov 1 04:16:55.065994 systemd[1]: Starting update-ssh-keys-after-ignition.service... Nov 1 04:16:55.070744 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 04:16:55.071033 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Nov 1 04:16:55.071533 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 04:16:55.071773 systemd[1]: Finished motdgen.service. Nov 1 04:16:55.079063 extend-filesystems[1282]: Found loop1 Nov 1 04:16:55.084420 extend-filesystems[1282]: Found vda Nov 1 04:16:55.084420 extend-filesystems[1282]: Found vda1 Nov 1 04:16:55.084420 extend-filesystems[1282]: Found vda2 Nov 1 04:16:55.084420 extend-filesystems[1282]: Found vda3 Nov 1 04:16:55.084420 extend-filesystems[1282]: Found usr Nov 1 04:16:55.084420 extend-filesystems[1282]: Found vda4 Nov 1 04:16:55.084420 extend-filesystems[1282]: Found vda6 Nov 1 04:16:55.084420 extend-filesystems[1282]: Found vda7 Nov 1 04:16:55.084420 extend-filesystems[1282]: Found vda9 Nov 1 04:16:55.084420 extend-filesystems[1282]: Checking size of /dev/vda9 Nov 1 04:16:55.101444 jq[1301]: true Nov 1 04:16:55.113659 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 04:16:55.113921 systemd[1]: Finished ssh-key-proc-cmdline.service. Nov 1 04:16:55.119492 extend-filesystems[1282]: Resized partition /dev/vda9 Nov 1 04:16:55.127341 jq[1311]: true Nov 1 04:16:55.127739 tar[1305]: linux-amd64/LICENSE Nov 1 04:16:55.127958 tar[1305]: linux-amd64/helm Nov 1 04:16:56.125210 systemd-resolved[1222]: Clock change detected. Flushing caches. Nov 1 04:16:56.125377 systemd-timesyncd[1224]: Contacted time server 162.159.200.123:123 (0.flatcar.pool.ntp.org). Nov 1 04:16:56.125439 systemd-timesyncd[1224]: Initial clock synchronization to Sat 2025-11-01 04:16:56.125160 UTC. Nov 1 04:16:56.137440 extend-filesystems[1319]: resize2fs 1.46.5 (30-Dec-2021) Nov 1 04:16:56.145343 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Nov 1 04:16:56.155946 dbus-daemon[1279]: [system] SELinux support is enabled Nov 1 04:16:56.156638 systemd[1]: Started dbus.service. Nov 1 04:16:56.159248 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 1 04:16:56.159284 systemd[1]: Reached target system-config.target. Nov 1 04:16:56.159722 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 04:16:56.159741 systemd[1]: Reached target user-config.target. Nov 1 04:16:56.177987 dbus-daemon[1279]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1086 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 1 04:16:56.182097 systemd[1]: Starting systemd-hostnamed.service... 
Nov 1 04:16:56.184484 update_engine[1300]: I1101 04:16:56.183897 1300 main.cc:92] Flatcar Update Engine starting Nov 1 04:16:56.200302 systemd[1]: Started update-engine.service. Nov 1 04:16:56.202740 systemd[1]: Started locksmithd.service. Nov 1 04:16:56.204514 update_engine[1300]: I1101 04:16:56.204387 1300 update_check_scheduler.cc:74] Next update check in 9m9s Nov 1 04:16:56.224307 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Nov 1 04:16:56.230417 extend-filesystems[1319]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 1 04:16:56.230417 extend-filesystems[1319]: old_desc_blocks = 1, new_desc_blocks = 8 Nov 1 04:16:56.230417 extend-filesystems[1319]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Nov 1 04:16:56.232225 extend-filesystems[1282]: Resized filesystem in /dev/vda9 Nov 1 04:16:56.233187 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 04:16:56.233458 systemd[1]: Finished extend-filesystems.service. Nov 1 04:16:56.250855 bash[1341]: Updated "/home/core/.ssh/authorized_keys" Nov 1 04:16:56.253183 systemd[1]: Finished update-ssh-keys-after-ignition.service. Nov 1 04:16:56.264981 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 04:16:56.265020 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 04:16:56.265974 env[1306]: time="2025-11-01T04:16:56.265917163Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Nov 1 04:16:56.299037 dbus-daemon[1279]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 1 04:16:56.299201 systemd[1]: Started systemd-hostnamed.service. Nov 1 04:16:56.300171 dbus-daemon[1279]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1330 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 1 04:16:56.302689 systemd[1]: Starting polkit.service... Nov 1 04:16:56.315483 polkitd[1348]: Started polkitd version 121 Nov 1 04:16:56.326646 polkitd[1348]: Loading rules from directory /etc/polkit-1/rules.d Nov 1 04:16:56.326716 polkitd[1348]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 1 04:16:56.327966 polkitd[1348]: Finished loading, compiling and executing 2 rules Nov 1 04:16:56.328404 dbus-daemon[1279]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 1 04:16:56.328558 systemd[1]: Started polkit.service. Nov 1 04:16:56.329242 polkitd[1348]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 1 04:16:56.331432 systemd-logind[1298]: Watching system buttons on /dev/input/event2 (Power Button) Nov 1 04:16:56.337440 systemd-logind[1298]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 1 04:16:56.342472 systemd-logind[1298]: New seat seat0. Nov 1 04:16:56.344436 systemd-hostnamed[1330]: Hostname set to (static) Nov 1 04:16:56.350904 systemd[1]: Started systemd-logind.service. Nov 1 04:16:56.365222 env[1306]: time="2025-11-01T04:16:56.365169120Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 1 04:16:56.365373 env[1306]: time="2025-11-01T04:16:56.365355698Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
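The resize logged above grows the ext4 filesystem on /dev/vda9 from 1617920 to 15121403 4 KiB blocks, i.e. from roughly 6.2 GiB to about 57.7 GiB, performed online by resize2fs while the filesystem is mounted at /. extend-filesystems.service drives the filesystem grow seen here; done by hand it would look roughly like the sketch below, with growpart needed only if the partition itself still had to be enlarged first (growpart from cloud-utils is an assumption, not something shown in this log):

  # Grow partition 9 to fill the disk (only if the partition is not already full-size)
  growpart /dev/vda 9
  # Grow the mounted ext4 filesystem online, then confirm
  resize2fs /dev/vda9
  df -h /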
type=io.containerd.snapshotter.v1 Nov 1 04:16:56.367076 env[1306]: time="2025-11-01T04:16:56.367042808Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 04:16:56.367076 env[1306]: time="2025-11-01T04:16:56.367074267Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 1 04:16:56.367351 env[1306]: time="2025-11-01T04:16:56.367331895Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 04:16:56.367409 env[1306]: time="2025-11-01T04:16:56.367352975Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 1 04:16:56.367409 env[1306]: time="2025-11-01T04:16:56.367365977Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Nov 1 04:16:56.367409 env[1306]: time="2025-11-01T04:16:56.367375972Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 1 04:16:56.367495 env[1306]: time="2025-11-01T04:16:56.367445884Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 1 04:16:56.367731 env[1306]: time="2025-11-01T04:16:56.367713743Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 1 04:16:56.367919 env[1306]: time="2025-11-01T04:16:56.367900127Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 04:16:56.367956 env[1306]: time="2025-11-01T04:16:56.367919351Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 1 04:16:56.367983 env[1306]: time="2025-11-01T04:16:56.367964896Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Nov 1 04:16:56.367983 env[1306]: time="2025-11-01T04:16:56.367976920Z" level=info msg="metadata content store policy set" policy=shared Nov 1 04:16:56.370612 env[1306]: time="2025-11-01T04:16:56.370584483Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 1 04:16:56.370612 env[1306]: time="2025-11-01T04:16:56.370615216Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 1 04:16:56.370716 env[1306]: time="2025-11-01T04:16:56.370630009Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 1 04:16:56.370716 env[1306]: time="2025-11-01T04:16:56.370672804Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 1 04:16:56.370716 env[1306]: time="2025-11-01T04:16:56.370690863Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Nov 1 04:16:56.370716 env[1306]: time="2025-11-01T04:16:56.370708429Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 1 04:16:56.370828 env[1306]: time="2025-11-01T04:16:56.370722261Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 1 04:16:56.370828 env[1306]: time="2025-11-01T04:16:56.370735985Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 04:16:56.370828 env[1306]: time="2025-11-01T04:16:56.370749184Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Nov 1 04:16:56.370828 env[1306]: time="2025-11-01T04:16:56.370764897Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 04:16:56.370828 env[1306]: time="2025-11-01T04:16:56.370781068Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 1 04:16:56.370828 env[1306]: time="2025-11-01T04:16:56.370797233Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 04:16:56.370984 env[1306]: time="2025-11-01T04:16:56.370907017Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 1 04:16:56.371012 env[1306]: time="2025-11-01T04:16:56.370980532Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 04:16:56.371360 env[1306]: time="2025-11-01T04:16:56.371340543Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 04:16:56.371400 env[1306]: time="2025-11-01T04:16:56.371378944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 04:16:56.371400 env[1306]: time="2025-11-01T04:16:56.371393229Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 04:16:56.371464 env[1306]: time="2025-11-01T04:16:56.371455936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 04:16:56.371492 env[1306]: time="2025-11-01T04:16:56.371470415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 04:16:56.371492 env[1306]: time="2025-11-01T04:16:56.371483301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 1 04:16:56.371555 env[1306]: time="2025-11-01T04:16:56.371494337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 04:16:56.371555 env[1306]: time="2025-11-01T04:16:56.371516703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 04:16:56.371555 env[1306]: time="2025-11-01T04:16:56.371532981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 04:16:56.371555 env[1306]: time="2025-11-01T04:16:56.371545620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 04:16:56.371662 env[1306]: time="2025-11-01T04:16:56.371557587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Nov 1 04:16:56.371662 env[1306]: time="2025-11-01T04:16:56.371571473Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 04:16:56.371728 env[1306]: time="2025-11-01T04:16:56.371706492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 04:16:56.371728 env[1306]: time="2025-11-01T04:16:56.371720485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 1 04:16:56.371780 env[1306]: time="2025-11-01T04:16:56.371732967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 1 04:16:56.371780 env[1306]: time="2025-11-01T04:16:56.371744031Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 04:16:56.371780 env[1306]: time="2025-11-01T04:16:56.371758738Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Nov 1 04:16:56.371780 env[1306]: time="2025-11-01T04:16:56.371768988Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 1 04:16:56.371877 env[1306]: time="2025-11-01T04:16:56.371793754Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Nov 1 04:16:56.371877 env[1306]: time="2025-11-01T04:16:56.371834056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 1 04:16:56.372078 env[1306]: time="2025-11-01T04:16:56.372030080Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false 
EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 04:16:56.373671 env[1306]: time="2025-11-01T04:16:56.372094059Z" level=info msg="Connect containerd service" Nov 1 04:16:56.373671 env[1306]: time="2025-11-01T04:16:56.372153997Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 04:16:56.374294 env[1306]: time="2025-11-01T04:16:56.374270268Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 04:16:56.374725 env[1306]: time="2025-11-01T04:16:56.374702930Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 04:16:56.374775 env[1306]: time="2025-11-01T04:16:56.374751358Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 04:16:56.374917 systemd[1]: Started containerd.service. Nov 1 04:16:56.376262 env[1306]: time="2025-11-01T04:16:56.376244192Z" level=info msg="containerd successfully booted in 0.119201s" Nov 1 04:16:56.377619 env[1306]: time="2025-11-01T04:16:56.377581839Z" level=info msg="Start subscribing containerd event" Nov 1 04:16:56.377667 env[1306]: time="2025-11-01T04:16:56.377649816Z" level=info msg="Start recovering state" Nov 1 04:16:56.377870 env[1306]: time="2025-11-01T04:16:56.377856274Z" level=info msg="Start event monitor" Nov 1 04:16:56.377906 env[1306]: time="2025-11-01T04:16:56.377883537Z" level=info msg="Start snapshots syncer" Nov 1 04:16:56.378003 env[1306]: time="2025-11-01T04:16:56.377990931Z" level=info msg="Start cni network conf syncer for default" Nov 1 04:16:56.378040 env[1306]: time="2025-11-01T04:16:56.378008236Z" level=info msg="Start streaming server" Nov 1 04:16:56.598525 systemd-networkd[1086]: eth0: Gained IPv6LL Nov 1 04:16:56.601019 systemd[1]: Finished systemd-networkd-wait-online.service. Nov 1 04:16:56.601761 systemd[1]: Reached target network-online.target. Nov 1 04:16:56.604521 systemd[1]: Starting kubelet.service... Nov 1 04:16:56.719741 sshd_keygen[1313]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 04:16:56.789088 systemd[1]: Finished sshd-keygen.service. Nov 1 04:16:56.792849 systemd[1]: Starting issuegen.service... Nov 1 04:16:56.803196 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 04:16:56.803445 systemd[1]: Finished issuegen.service. Nov 1 04:16:56.805928 systemd[1]: Starting systemd-user-sessions.service... Nov 1 04:16:56.813755 systemd[1]: Finished systemd-user-sessions.service. Nov 1 04:16:56.816410 systemd[1]: Started getty@tty1.service. Nov 1 04:16:56.819605 systemd[1]: Started serial-getty@ttyS0.service. Nov 1 04:16:56.820969 systemd[1]: Reached target getty.target. Nov 1 04:16:56.868434 locksmithd[1334]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 04:16:56.942979 tar[1305]: linux-amd64/README.md Nov 1 04:16:56.951918 systemd[1]: Finished prepare-helm.service. Nov 1 04:16:57.736427 systemd[1]: Started kubelet.service. Nov 1 04:16:58.112218 systemd-networkd[1086]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:19a6:24:19ff:fef4:669a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:19a6:24:19ff:fef4:669a/64 assigned by NDisc. 
Nov 1 04:16:58.112231 systemd-networkd[1086]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Nov 1 04:16:58.152817 systemd[1]: Created slice system-sshd.slice. Nov 1 04:16:58.155884 systemd[1]: Started sshd@0-10.244.102.154:22-139.178.89.65:59716.service. Nov 1 04:16:58.378261 kubelet[1391]: E1101 04:16:58.378103 1391 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 04:16:58.381773 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 04:16:58.381947 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 04:16:59.081484 sshd[1398]: Accepted publickey for core from 139.178.89.65 port 59716 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 04:16:59.085043 sshd[1398]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 04:16:59.103767 systemd[1]: Created slice user-500.slice. Nov 1 04:16:59.105995 systemd[1]: Starting user-runtime-dir@500.service... Nov 1 04:16:59.113733 systemd-logind[1298]: New session 1 of user core. Nov 1 04:16:59.120676 systemd[1]: Finished user-runtime-dir@500.service. Nov 1 04:16:59.122909 systemd[1]: Starting user@500.service... Nov 1 04:16:59.130935 (systemd)[1404]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 04:16:59.210708 systemd[1404]: Queued start job for default target default.target. Nov 1 04:16:59.211906 systemd[1404]: Reached target paths.target. Nov 1 04:16:59.212075 systemd[1404]: Reached target sockets.target. Nov 1 04:16:59.212191 systemd[1404]: Reached target timers.target. Nov 1 04:16:59.212285 systemd[1404]: Reached target basic.target. Nov 1 04:16:59.212517 systemd[1]: Started user@500.service. Nov 1 04:16:59.215107 systemd[1404]: Reached target default.target. Nov 1 04:16:59.215277 systemd[1404]: Startup finished in 75ms. Nov 1 04:16:59.217763 systemd[1]: Started session-1.scope. Nov 1 04:16:59.853980 systemd[1]: Started sshd@1-10.244.102.154:22-139.178.89.65:59730.service. Nov 1 04:17:00.762776 sshd[1414]: Accepted publickey for core from 139.178.89.65 port 59730 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 04:17:00.766041 sshd[1414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 04:17:00.775407 systemd-logind[1298]: New session 2 of user core. Nov 1 04:17:00.776898 systemd[1]: Started session-2.scope. Nov 1 04:17:01.397136 sshd[1414]: pam_unix(sshd:session): session closed for user core Nov 1 04:17:01.403915 systemd[1]: sshd@1-10.244.102.154:22-139.178.89.65:59730.service: Deactivated successfully. Nov 1 04:17:01.405112 systemd[1]: session-2.scope: Deactivated successfully. Nov 1 04:17:01.406228 systemd-logind[1298]: Session 2 logged out. Waiting for processes to exit. Nov 1 04:17:01.407562 systemd-logind[1298]: Removed session 2. Nov 1 04:17:01.544349 systemd[1]: Started sshd@2-10.244.102.154:22-139.178.89.65:59744.service. 
Nov 1 04:17:02.444600 sshd[1421]: Accepted publickey for core from 139.178.89.65 port 59744 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 04:17:02.448486 sshd[1421]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 04:17:02.458605 systemd[1]: Started session-3.scope. Nov 1 04:17:02.459297 systemd-logind[1298]: New session 3 of user core. Nov 1 04:17:03.080915 sshd[1421]: pam_unix(sshd:session): session closed for user core Nov 1 04:17:03.088749 systemd[1]: sshd@2-10.244.102.154:22-139.178.89.65:59744.service: Deactivated successfully. Nov 1 04:17:03.090594 systemd[1]: session-3.scope: Deactivated successfully. Nov 1 04:17:03.091616 systemd-logind[1298]: Session 3 logged out. Waiting for processes to exit. Nov 1 04:17:03.092774 systemd-logind[1298]: Removed session 3. Nov 1 04:17:03.170880 coreos-metadata[1277]: Nov 01 04:17:03.170 WARN failed to locate config-drive, using the metadata service API instead Nov 1 04:17:03.222748 coreos-metadata[1277]: Nov 01 04:17:03.222 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Nov 1 04:17:03.248731 coreos-metadata[1277]: Nov 01 04:17:03.248 INFO Fetch successful Nov 1 04:17:03.249258 coreos-metadata[1277]: Nov 01 04:17:03.248 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Nov 1 04:17:03.273588 coreos-metadata[1277]: Nov 01 04:17:03.273 INFO Fetch successful Nov 1 04:17:03.275161 unknown[1277]: wrote ssh authorized keys file for user: core Nov 1 04:17:03.289822 update-ssh-keys[1431]: Updated "/home/core/.ssh/authorized_keys" Nov 1 04:17:03.290891 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Nov 1 04:17:03.291337 systemd[1]: Reached target multi-user.target. Nov 1 04:17:03.293254 systemd[1]: Starting systemd-update-utmp-runlevel.service... Nov 1 04:17:03.303892 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Nov 1 04:17:03.304162 systemd[1]: Finished systemd-update-utmp-runlevel.service. Nov 1 04:17:03.304499 systemd[1]: Startup finished in 7.173s (kernel) + 12.364s (userspace) = 19.537s. Nov 1 04:17:08.633766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 04:17:08.634242 systemd[1]: Stopped kubelet.service. Nov 1 04:17:08.638029 systemd[1]: Starting kubelet.service... Nov 1 04:17:08.763201 systemd[1]: Started kubelet.service. Nov 1 04:17:08.816413 kubelet[1444]: E1101 04:17:08.816367 1444 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 04:17:08.819869 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 04:17:08.820037 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 04:17:13.235964 systemd[1]: Started sshd@3-10.244.102.154:22-139.178.89.65:46368.service. Nov 1 04:17:14.145743 sshd[1451]: Accepted publickey for core from 139.178.89.65 port 46368 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 04:17:14.149957 sshd[1451]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 04:17:14.160589 systemd[1]: Started session-4.scope. Nov 1 04:17:14.161057 systemd-logind[1298]: New session 4 of user core. 
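Each sshd@N-10.244.102.154:22-139.178.89.65:PORT.service above is a per-connection instance spawned from the socket-activated sshd.socket listed earlier ("Listening on sshd.socket"), so every SSH login creates and tears down its own transient service under system-sshd.slice. A quick way to watch these instances (a sketch):

  # Per-connection sshd instances currently running
  systemctl list-units 'sshd@*' --no-legend
  # The listening socket that accepts them
  systemctl status sshd.socket --no-pager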
Nov 1 04:17:14.782021 sshd[1451]: pam_unix(sshd:session): session closed for user core Nov 1 04:17:14.789556 systemd[1]: sshd@3-10.244.102.154:22-139.178.89.65:46368.service: Deactivated successfully. Nov 1 04:17:14.791154 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 04:17:14.792110 systemd-logind[1298]: Session 4 logged out. Waiting for processes to exit. Nov 1 04:17:14.794599 systemd-logind[1298]: Removed session 4. Nov 1 04:17:14.936816 systemd[1]: Started sshd@4-10.244.102.154:22-139.178.89.65:46374.service. Nov 1 04:17:15.852045 sshd[1458]: Accepted publickey for core from 139.178.89.65 port 46374 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 04:17:15.856082 sshd[1458]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 04:17:15.864446 systemd-logind[1298]: New session 5 of user core. Nov 1 04:17:15.866943 systemd[1]: Started session-5.scope. Nov 1 04:17:16.493578 sshd[1458]: pam_unix(sshd:session): session closed for user core Nov 1 04:17:16.500243 systemd[1]: sshd@4-10.244.102.154:22-139.178.89.65:46374.service: Deactivated successfully. Nov 1 04:17:16.502352 systemd-logind[1298]: Session 5 logged out. Waiting for processes to exit. Nov 1 04:17:16.502532 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 04:17:16.504971 systemd-logind[1298]: Removed session 5. Nov 1 04:17:16.641623 systemd[1]: Started sshd@5-10.244.102.154:22-139.178.89.65:41906.service. Nov 1 04:17:17.552901 sshd[1465]: Accepted publickey for core from 139.178.89.65 port 41906 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 04:17:17.557124 sshd[1465]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 04:17:17.564575 systemd[1]: Started session-6.scope. Nov 1 04:17:17.565416 systemd-logind[1298]: New session 6 of user core. Nov 1 04:17:18.188613 sshd[1465]: pam_unix(sshd:session): session closed for user core Nov 1 04:17:18.193223 systemd[1]: sshd@5-10.244.102.154:22-139.178.89.65:41906.service: Deactivated successfully. Nov 1 04:17:18.194500 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 04:17:18.196389 systemd-logind[1298]: Session 6 logged out. Waiting for processes to exit. Nov 1 04:17:18.198100 systemd-logind[1298]: Removed session 6. Nov 1 04:17:18.338083 systemd[1]: Started sshd@6-10.244.102.154:22-139.178.89.65:41922.service. Nov 1 04:17:19.071953 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 1 04:17:19.072451 systemd[1]: Stopped kubelet.service. Nov 1 04:17:19.076393 systemd[1]: Starting kubelet.service... Nov 1 04:17:19.200177 systemd[1]: Started kubelet.service. Nov 1 04:17:19.253357 sshd[1472]: Accepted publickey for core from 139.178.89.65 port 41922 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 04:17:19.255101 sshd[1472]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 04:17:19.261531 systemd[1]: Started session-7.scope. Nov 1 04:17:19.261865 systemd-logind[1298]: New session 7 of user core. 
Nov 1 04:17:19.266377 kubelet[1482]: E1101 04:17:19.266116 1482 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 04:17:19.271804 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 04:17:19.271953 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 04:17:19.751926 sudo[1491]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 1 04:17:19.752549 sudo[1491]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 04:17:19.766915 dbus-daemon[1279]: \xd0\u000dL\x88_U: received setenforce notice (enforcing=-1226636800) Nov 1 04:17:19.770040 sudo[1491]: pam_unix(sudo:session): session closed for user root Nov 1 04:17:19.919759 sshd[1472]: pam_unix(sshd:session): session closed for user core Nov 1 04:17:19.925358 systemd-logind[1298]: Session 7 logged out. Waiting for processes to exit. Nov 1 04:17:19.927088 systemd[1]: sshd@6-10.244.102.154:22-139.178.89.65:41922.service: Deactivated successfully. Nov 1 04:17:19.928157 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 04:17:19.929635 systemd-logind[1298]: Removed session 7. Nov 1 04:17:20.077762 systemd[1]: Started sshd@7-10.244.102.154:22-139.178.89.65:41924.service. Nov 1 04:17:20.995486 sshd[1495]: Accepted publickey for core from 139.178.89.65 port 41924 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 04:17:20.999812 sshd[1495]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 04:17:21.009071 systemd-logind[1298]: New session 8 of user core. Nov 1 04:17:21.010214 systemd[1]: Started session-8.scope. Nov 1 04:17:21.486258 sudo[1500]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 1 04:17:21.487017 sudo[1500]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 04:17:21.492396 sudo[1500]: pam_unix(sudo:session): session closed for user root Nov 1 04:17:21.500705 sudo[1499]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 1 04:17:21.501101 sudo[1499]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 04:17:21.513694 systemd[1]: Stopping audit-rules.service... Nov 1 04:17:21.515000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Nov 1 04:17:21.518481 auditctl[1503]: No rules Nov 1 04:17:21.519407 kernel: kauditd_printk_skb: 19 callbacks suppressed Nov 1 04:17:21.519481 kernel: audit: type=1305 audit(1761970641.515:160): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Nov 1 04:17:21.519872 systemd[1]: audit-rules.service: Deactivated successfully. Nov 1 04:17:21.520163 systemd[1]: Stopped audit-rules.service. Nov 1 04:17:21.522905 systemd[1]: Starting audit-rules.service... 
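The repeated kubelet exits with "failed to load Kubelet config file /var/lib/kubelet/config.yaml: no such file or directory" (restart counters 1 and 2 above) are expected on a node that has not been bootstrapped into a cluster yet: that file is normally written by kubeadm during init or join, and the earlier containerd warning about no CNI config in /etc/cni/net.d reflects the same pre-bootstrap state. A quick check of what is missing (a sketch; the join command and its token are placeholders, not values from this log):

  # Both paths are empty until the node is joined and a network add-on is installed
  ls -l /var/lib/kubelet/config.yaml /etc/cni/net.d
  # kubeadm join writes the kubelet config; CNI config appears once a network add-on is deployed
  # kubeadm join <control-plane>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>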
Nov 1 04:17:21.515000 audit[1503]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffed7422860 a2=420 a3=0 items=0 ppid=1 pid=1503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:21.528417 kernel: audit: type=1300 audit(1761970641.515:160): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffed7422860 a2=420 a3=0 items=0 ppid=1 pid=1503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:21.515000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Nov 1 04:17:21.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:17:21.534426 kernel: audit: type=1327 audit(1761970641.515:160): proctitle=2F7362696E2F617564697463746C002D44 Nov 1 04:17:21.534484 kernel: audit: type=1131 audit(1761970641.519:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:17:21.555673 augenrules[1521]: No rules Nov 1 04:17:21.563401 kernel: audit: type=1130 audit(1761970641.556:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:17:21.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:17:21.558282 sudo[1499]: pam_unix(sudo:session): session closed for user root Nov 1 04:17:21.556909 systemd[1]: Finished audit-rules.service. Nov 1 04:17:21.557000 audit[1499]: USER_END pid=1499 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 04:17:21.557000 audit[1499]: CRED_DISP pid=1499 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 04:17:21.570350 kernel: audit: type=1106 audit(1761970641.557:163): pid=1499 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 04:17:21.570435 kernel: audit: type=1104 audit(1761970641.557:164): pid=1499 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Nov 1 04:17:21.705304 sshd[1495]: pam_unix(sshd:session): session closed for user core Nov 1 04:17:21.707000 audit[1495]: USER_END pid=1495 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:17:21.710585 systemd[1]: sshd@7-10.244.102.154:22-139.178.89.65:41924.service: Deactivated successfully. Nov 1 04:17:21.712028 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 04:17:21.707000 audit[1495]: CRED_DISP pid=1495 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:17:21.718771 kernel: audit: type=1106 audit(1761970641.707:165): pid=1495 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:17:21.718877 kernel: audit: type=1104 audit(1761970641.707:166): pid=1495 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:17:21.718805 systemd-logind[1298]: Session 8 logged out. Waiting for processes to exit. Nov 1 04:17:21.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.244.102.154:22-139.178.89.65:41924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:17:21.722554 kernel: audit: type=1131 audit(1761970641.707:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.244.102.154:22-139.178.89.65:41924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:17:21.723027 systemd-logind[1298]: Removed session 8. Nov 1 04:17:21.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.244.102.154:22-139.178.89.65:41926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:17:21.856426 systemd[1]: Started sshd@8-10.244.102.154:22-139.178.89.65:41926.service. 
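The audit PROCTITLE fields in these records are hex-encoded argv strings with NUL separators. Decoding the -D flush from the audit-rules restart just above and the -R reload logged earlier in boot shows plain auditctl invocations, consistent with the "No rules" results from augenrules. A small decoding sketch (assuming xxd is available):

  # '/sbin/auditctl -D'  (flush all audit rules)
  echo 2F7362696E2F617564697463746C002D44 | xxd -r -p | tr '\0' ' '; echo
  # '/sbin/auditctl -R /etc/audit/audit.rules'  (reload rules from file)
  echo 2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 | xxd -r -p | tr '\0' ' '; echo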
Nov 1 04:17:22.765000 audit[1528]: USER_ACCT pid=1528 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:17:22.766207 sshd[1528]: Accepted publickey for core from 139.178.89.65 port 41926 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 04:17:22.767000 audit[1528]: CRED_ACQ pid=1528 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:17:22.767000 audit[1528]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd1964eb10 a2=3 a3=0 items=0 ppid=1 pid=1528 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:22.767000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 04:17:22.769415 sshd[1528]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 04:17:22.776403 systemd-logind[1298]: New session 9 of user core. Nov 1 04:17:22.776985 systemd[1]: Started session-9.scope. Nov 1 04:17:22.782000 audit[1528]: USER_START pid=1528 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:17:22.784000 audit[1531]: CRED_ACQ pid=1531 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:17:23.253000 audit[1532]: USER_ACCT pid=1532 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 04:17:23.254164 sudo[1532]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 04:17:23.254000 audit[1532]: CRED_REFR pid=1532 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 04:17:23.254904 sudo[1532]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Nov 1 04:17:23.257000 audit[1532]: USER_START pid=1532 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 04:17:23.297688 systemd[1]: Starting docker.service... 
Nov 1 04:17:23.362938 env[1542]: time="2025-11-01T04:17:23.362810042Z" level=info msg="Starting up" Nov 1 04:17:23.366779 env[1542]: time="2025-11-01T04:17:23.366727991Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 04:17:23.366779 env[1542]: time="2025-11-01T04:17:23.366750218Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 04:17:23.366779 env[1542]: time="2025-11-01T04:17:23.366784351Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 04:17:23.367163 env[1542]: time="2025-11-01T04:17:23.366799671Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 04:17:23.372801 env[1542]: time="2025-11-01T04:17:23.372761315Z" level=info msg="parsed scheme: \"unix\"" module=grpc Nov 1 04:17:23.372801 env[1542]: time="2025-11-01T04:17:23.372787627Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Nov 1 04:17:23.372801 env[1542]: time="2025-11-01T04:17:23.372802895Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Nov 1 04:17:23.372984 env[1542]: time="2025-11-01T04:17:23.372812449Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Nov 1 04:17:23.387795 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport861499756-merged.mount: Deactivated successfully. Nov 1 04:17:23.418236 env[1542]: time="2025-11-01T04:17:23.418168939Z" level=warning msg="Your kernel does not support cgroup blkio weight" Nov 1 04:17:23.418236 env[1542]: time="2025-11-01T04:17:23.418199768Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Nov 1 04:17:23.418756 env[1542]: time="2025-11-01T04:17:23.418456425Z" level=info msg="Loading containers: start." 
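The NETFILTER_CFG and SYSCALL records that follow are dockerd laying down its standard firewall plumbing: the hex proctitles decode to ordinary iptables calls that create the DOCKER, DOCKER-ISOLATION-STAGE-1/2 and DOCKER-USER chains, hook DOCKER-USER into FORWARD, and later add the POSTROUTING MASQUERADE rule for the default docker0 bridge subnet 172.17.0.0/16. Decoded, the first part of the sequence is roughly the following (a sketch of what the daemon itself runs, not commands to type on a Docker host):

  /usr/sbin/iptables --wait -t nat    -N DOCKER
  /usr/sbin/iptables --wait -t filter -N DOCKER
  /usr/sbin/iptables --wait -t filter -N DOCKER-ISOLATION-STAGE-1
  /usr/sbin/iptables --wait -t filter -N DOCKER-ISOLATION-STAGE-2
  /usr/sbin/iptables --wait -A DOCKER-ISOLATION-STAGE-1 -j RETURN
  /usr/sbin/iptables --wait -A DOCKER-ISOLATION-STAGE-2 -j RETURN
  /usr/sbin/iptables --wait -t filter -N DOCKER-USER
  /usr/sbin/iptables --wait -A DOCKER-USER -j RETURN
  /usr/sbin/iptables --wait -I FORWARD -j DOCKER-USER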
Nov 1 04:17:23.504000 audit[1575]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1575 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:23.504000 audit[1575]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fffb5228130 a2=0 a3=7fffb522811c items=0 ppid=1542 pid=1575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:23.504000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Nov 1 04:17:23.507000 audit[1577]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1577 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:23.507000 audit[1577]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc49159570 a2=0 a3=7ffc4915955c items=0 ppid=1542 pid=1577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:23.507000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Nov 1 04:17:23.509000 audit[1579]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1579 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:23.509000 audit[1579]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe3aad5990 a2=0 a3=7ffe3aad597c items=0 ppid=1542 pid=1579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:23.509000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Nov 1 04:17:23.512000 audit[1581]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1581 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:23.512000 audit[1581]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe96d0afa0 a2=0 a3=7ffe96d0af8c items=0 ppid=1542 pid=1581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:23.512000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Nov 1 04:17:23.516000 audit[1583]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1583 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:23.516000 audit[1583]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff033fd0a0 a2=0 a3=7fff033fd08c items=0 ppid=1542 pid=1583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:23.516000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Nov 1 04:17:23.545000 audit[1588]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1588 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 
04:17:23.545000 audit[1588]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe257fe080 a2=0 a3=7ffe257fe06c items=0 ppid=1542 pid=1588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:23.545000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Nov 1 04:17:23.556000 audit[1590]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1590 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:23.556000 audit[1590]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc5e4cd060 a2=0 a3=7ffc5e4cd04c items=0 ppid=1542 pid=1590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:23.556000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Nov 1 04:17:23.561000 audit[1592]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1592 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:23.561000 audit[1592]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc2d3ed330 a2=0 a3=7ffc2d3ed31c items=0 ppid=1542 pid=1592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:23.561000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Nov 1 04:17:23.564000 audit[1594]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1594 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:23.564000 audit[1594]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffc07d08910 a2=0 a3=7ffc07d088fc items=0 ppid=1542 pid=1594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:23.564000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Nov 1 04:17:23.572000 audit[1598]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1598 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:23.572000 audit[1598]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffca1198350 a2=0 a3=7ffca119833c items=0 ppid=1542 pid=1598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:23.572000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Nov 1 04:17:23.578000 audit[1599]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1599 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:23.578000 audit[1599]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd869f7d50 a2=0 a3=7ffd869f7d3c items=0 ppid=1542 pid=1599 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:23.578000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Nov 1 04:17:23.595440 kernel: Initializing XFRM netlink socket Nov 1 04:17:23.636654 env[1542]: time="2025-11-01T04:17:23.636542193Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Nov 1 04:17:23.675000 audit[1607]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1607 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:23.675000 audit[1607]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffc20aa3540 a2=0 a3=7ffc20aa352c items=0 ppid=1542 pid=1607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:23.675000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Nov 1 04:17:23.689000 audit[1610]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1610 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:23.689000 audit[1610]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff227a3680 a2=0 a3=7fff227a366c items=0 ppid=1542 pid=1610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:23.689000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Nov 1 04:17:23.693000 audit[1613]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1613 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:23.693000 audit[1613]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffeea068bb0 a2=0 a3=7ffeea068b9c items=0 ppid=1542 pid=1613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:23.693000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Nov 1 04:17:23.696000 audit[1615]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1615 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:23.696000 audit[1615]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffcc6260a00 a2=0 a3=7ffcc62609ec items=0 ppid=1542 pid=1615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:23.696000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Nov 1 04:17:23.699000 audit[1617]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1617 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:23.699000 audit[1617]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffd0bab0d60 a2=0 a3=7ffd0bab0d4c items=0 ppid=1542 pid=1617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:23.699000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Nov 1 04:17:23.702000 audit[1619]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1619 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:23.702000 audit[1619]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffd02b728a0 a2=0 a3=7ffd02b7288c items=0 ppid=1542 pid=1619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:23.702000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Nov 1 04:17:23.708000 audit[1621]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1621 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:23.708000 audit[1621]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffe6a2003d0 a2=0 a3=7ffe6a2003bc items=0 ppid=1542 pid=1621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:23.708000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Nov 1 04:17:23.727000 audit[1624]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1624 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:23.727000 audit[1624]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7fff0a669b90 a2=0 a3=7fff0a669b7c items=0 ppid=1542 pid=1624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:23.727000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Nov 1 04:17:23.731000 audit[1626]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1626 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:23.731000 audit[1626]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffeaf91c1f0 a2=0 a3=7ffeaf91c1dc items=0 ppid=1542 pid=1626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:23.731000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Nov 1 04:17:23.736000 
audit[1628]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1628 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:23.736000 audit[1628]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fff099da4a0 a2=0 a3=7fff099da48c items=0 ppid=1542 pid=1628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:23.736000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Nov 1 04:17:23.741000 audit[1630]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1630 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:23.741000 audit[1630]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffe46ba330 a2=0 a3=7fffe46ba31c items=0 ppid=1542 pid=1630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:23.741000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Nov 1 04:17:23.743306 systemd-networkd[1086]: docker0: Link UP Nov 1 04:17:23.755000 audit[1634]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1634 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:23.755000 audit[1634]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdf8962bf0 a2=0 a3=7ffdf8962bdc items=0 ppid=1542 pid=1634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:23.755000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Nov 1 04:17:23.763000 audit[1635]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1635 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:23.763000 audit[1635]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd6e2fc860 a2=0 a3=7ffd6e2fc84c items=0 ppid=1542 pid=1635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:23.763000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Nov 1 04:17:23.764159 env[1542]: time="2025-11-01T04:17:23.764112578Z" level=info msg="Loading containers: done." 
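[Editor's note] The audit records above carry each iptables invocation in the hex-encoded PROCTITLE field; auditd hex-encodes the value because the argv elements are separated by NUL bytes. A minimal Python sketch for reading those fields back as command lines; decode_proctitle is a hypothetical helper (not part of the log), and the hex value is copied verbatim from the NETFILTER_CFG entries logged by dockerd above.

def decode_proctitle(hex_value: str) -> str:
    """Turn an audit PROCTITLE hex string back into a readable command line."""
    raw = bytes.fromhex(hex_value)                 # hex string -> raw bytes
    return b" ".join(raw.split(b"\x00")).decode()  # NUL-separated argv -> one line

# Value taken from the first nat-table NETFILTER_CFG entry above:
print(decode_proctitle(
    "2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552"
))
# prints: /usr/sbin/iptables --wait -t nat -N DOCKER

The same decoding applies to every PROCTITLE field in this section, including the later KUBE-* chain registrations performed by the kubelet.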
Nov 1 04:17:23.789000 env[1542]: time="2025-11-01T04:17:23.788940439Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 04:17:23.789270 env[1542]: time="2025-11-01T04:17:23.789239269Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Nov 1 04:17:23.789426 env[1542]: time="2025-11-01T04:17:23.789403331Z" level=info msg="Daemon has completed initialization" Nov 1 04:17:23.802113 systemd[1]: Started docker.service. Nov 1 04:17:23.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:17:23.814091 env[1542]: time="2025-11-01T04:17:23.814036193Z" level=info msg="API listen on /run/docker.sock" Nov 1 04:17:24.973494 env[1306]: time="2025-11-01T04:17:24.973389059Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 1 04:17:25.832577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2853848836.mount: Deactivated successfully. Nov 1 04:17:28.067276 env[1306]: time="2025-11-01T04:17:28.067087810Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:28.069234 env[1306]: time="2025-11-01T04:17:28.069179087Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:28.071609 env[1306]: time="2025-11-01T04:17:28.071369177Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:28.073416 env[1306]: time="2025-11-01T04:17:28.073362537Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:28.074271 env[1306]: time="2025-11-01T04:17:28.074211272Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 1 04:17:28.077306 env[1306]: time="2025-11-01T04:17:28.077242471Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 1 04:17:28.159548 kernel: kauditd_printk_skb: 84 callbacks suppressed Nov 1 04:17:28.159681 kernel: audit: type=1131 audit(1761970648.154:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:17:28.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:17:28.154048 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Nov 1 04:17:29.536367 kernel: audit: type=1130 audit(1761970649.523:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:17:29.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:17:29.523718 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 1 04:17:29.524195 systemd[1]: Stopped kubelet.service. Nov 1 04:17:29.536166 systemd[1]: Starting kubelet.service... Nov 1 04:17:29.541629 kernel: audit: type=1131 audit(1761970649.524:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:17:29.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:17:29.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:17:29.688232 systemd[1]: Started kubelet.service. Nov 1 04:17:29.692334 kernel: audit: type=1130 audit(1761970649.688:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:17:29.763867 kubelet[1681]: E1101 04:17:29.763812 1681 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 04:17:29.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Nov 1 04:17:29.771377 kernel: audit: type=1131 audit(1761970649.765:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Nov 1 04:17:29.765756 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 04:17:29.765923 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 1 04:17:30.135266 env[1306]: time="2025-11-01T04:17:30.135221412Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:30.137642 env[1306]: time="2025-11-01T04:17:30.137613495Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:30.140390 env[1306]: time="2025-11-01T04:17:30.140359873Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:30.143303 env[1306]: time="2025-11-01T04:17:30.143279498Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:30.145443 env[1306]: time="2025-11-01T04:17:30.145387861Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 1 04:17:30.146775 env[1306]: time="2025-11-01T04:17:30.146752927Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 1 04:17:32.248281 env[1306]: time="2025-11-01T04:17:32.248110980Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:32.251058 env[1306]: time="2025-11-01T04:17:32.250985296Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:32.254784 env[1306]: time="2025-11-01T04:17:32.254723430Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:32.260190 env[1306]: time="2025-11-01T04:17:32.260156497Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:32.260763 env[1306]: time="2025-11-01T04:17:32.260735285Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 1 04:17:32.263721 env[1306]: time="2025-11-01T04:17:32.263646298Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 1 04:17:33.755561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount470501969.mount: Deactivated successfully. 
Nov 1 04:17:34.807346 env[1306]: time="2025-11-01T04:17:34.807240031Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:34.809188 env[1306]: time="2025-11-01T04:17:34.809145798Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:34.809627 env[1306]: time="2025-11-01T04:17:34.809601333Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:34.811460 env[1306]: time="2025-11-01T04:17:34.811413774Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:34.812043 env[1306]: time="2025-11-01T04:17:34.812007854Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 1 04:17:34.812911 env[1306]: time="2025-11-01T04:17:34.812801665Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 1 04:17:35.536459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount113380082.mount: Deactivated successfully. Nov 1 04:17:36.947147 env[1306]: time="2025-11-01T04:17:36.947063901Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:36.951229 env[1306]: time="2025-11-01T04:17:36.951163055Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:36.953504 env[1306]: time="2025-11-01T04:17:36.953462677Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:36.956276 env[1306]: time="2025-11-01T04:17:36.956211075Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 1 04:17:36.957490 env[1306]: time="2025-11-01T04:17:36.957401498Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 1 04:17:36.958080 env[1306]: time="2025-11-01T04:17:36.957393340Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:37.626110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2233361335.mount: Deactivated successfully. 
Nov 1 04:17:37.631737 env[1306]: time="2025-11-01T04:17:37.631665746Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:37.634082 env[1306]: time="2025-11-01T04:17:37.634018105Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:37.638037 env[1306]: time="2025-11-01T04:17:37.637999199Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:37.639759 env[1306]: time="2025-11-01T04:17:37.639724731Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:37.640427 env[1306]: time="2025-11-01T04:17:37.640394191Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 1 04:17:37.641038 env[1306]: time="2025-11-01T04:17:37.641009379Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 1 04:17:38.503550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3662049377.mount: Deactivated successfully. Nov 1 04:17:39.890365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 1 04:17:39.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:17:39.905667 kernel: audit: type=1130 audit(1761970659.889:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:17:39.890916 systemd[1]: Stopped kubelet.service. Nov 1 04:17:39.905733 systemd[1]: Starting kubelet.service... Nov 1 04:17:39.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:17:39.914709 kernel: audit: type=1131 audit(1761970659.900:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:17:40.059380 systemd[1]: Started kubelet.service. Nov 1 04:17:40.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:17:40.066651 kernel: audit: type=1130 audit(1761970660.058:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 04:17:40.145168 kubelet[1696]: E1101 04:17:40.144445 1696 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 04:17:40.147214 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 04:17:40.147396 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 04:17:40.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Nov 1 04:17:40.151341 kernel: audit: type=1131 audit(1761970660.146:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Nov 1 04:17:41.297774 env[1306]: time="2025-11-01T04:17:41.297566384Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:41.300290 env[1306]: time="2025-11-01T04:17:41.300241164Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:41.302583 env[1306]: time="2025-11-01T04:17:41.302554766Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:41.306035 env[1306]: time="2025-11-01T04:17:41.305980939Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:41.307465 env[1306]: time="2025-11-01T04:17:41.307405590Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 1 04:17:41.781650 update_engine[1300]: I1101 04:17:41.781420 1300 update_attempter.cc:509] Updating boot flags... Nov 1 04:17:44.041432 systemd[1]: Stopped kubelet.service. Nov 1 04:17:44.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:17:44.045826 systemd[1]: Starting kubelet.service... Nov 1 04:17:44.049344 kernel: audit: type=1130 audit(1761970664.040:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:17:44.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:17:44.054345 kernel: audit: type=1131 audit(1761970664.040:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 04:17:44.088438 systemd[1]: Reloading. Nov 1 04:17:44.206284 /usr/lib/systemd/system-generators/torcx-generator[1762]: time="2025-11-01T04:17:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 04:17:44.206746 /usr/lib/systemd/system-generators/torcx-generator[1762]: time="2025-11-01T04:17:44Z" level=info msg="torcx already run" Nov 1 04:17:44.297399 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 04:17:44.298068 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 04:17:44.319379 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 04:17:44.426069 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 1 04:17:44.426703 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 1 04:17:44.427233 systemd[1]: Stopped kubelet.service. Nov 1 04:17:44.432390 kernel: audit: type=1130 audit(1761970664.426:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Nov 1 04:17:44.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Nov 1 04:17:44.437486 systemd[1]: Starting kubelet.service... Nov 1 04:17:44.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:17:44.551728 systemd[1]: Started kubelet.service. Nov 1 04:17:44.558544 kernel: audit: type=1130 audit(1761970664.551:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:17:44.613584 kubelet[1824]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 04:17:44.613584 kubelet[1824]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 04:17:44.613584 kubelet[1824]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 1 04:17:44.614407 kubelet[1824]: I1101 04:17:44.613705 1824 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 04:17:44.973833 kubelet[1824]: I1101 04:17:44.973791 1824 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 04:17:44.974030 kubelet[1824]: I1101 04:17:44.974017 1824 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 04:17:44.974420 kubelet[1824]: I1101 04:17:44.974405 1824 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 04:17:45.013690 kubelet[1824]: I1101 04:17:45.013651 1824 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 04:17:45.016359 kubelet[1824]: E1101 04:17:45.016096 1824 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.244.102.154:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.102.154:6443: connect: connection refused" logger="UnhandledError" Nov 1 04:17:45.023280 kubelet[1824]: E1101 04:17:45.023236 1824 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 04:17:45.023280 kubelet[1824]: I1101 04:17:45.023275 1824 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 04:17:45.027720 kubelet[1824]: I1101 04:17:45.027671 1824 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 04:17:45.030571 kubelet[1824]: I1101 04:17:45.030526 1824 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 04:17:45.030806 kubelet[1824]: I1101 04:17:45.030569 1824 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-i9e8z.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 1 04:17:45.030806 kubelet[1824]: I1101 04:17:45.030804 1824 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 04:17:45.030806 kubelet[1824]: I1101 04:17:45.030817 1824 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 04:17:45.031257 kubelet[1824]: I1101 04:17:45.030990 1824 state_mem.go:36] "Initialized new in-memory state store" Nov 1 04:17:45.035335 kubelet[1824]: I1101 04:17:45.035290 1824 kubelet.go:446] "Attempting to sync node with API server" Nov 1 04:17:45.039456 kubelet[1824]: W1101 04:17:45.039392 1824 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.102.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-i9e8z.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.102.154:6443: connect: connection refused Nov 1 04:17:45.039561 kubelet[1824]: E1101 04:17:45.039479 1824 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.102.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-i9e8z.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.102.154:6443: connect: connection refused" logger="UnhandledError" Nov 1 04:17:45.041939 kubelet[1824]: I1101 04:17:45.041907 1824 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 04:17:45.042040 kubelet[1824]: I1101 04:17:45.041969 1824 kubelet.go:352] "Adding apiserver pod source" Nov 1 04:17:45.042040 kubelet[1824]: I1101 04:17:45.041988 1824 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 04:17:45.054469 
kubelet[1824]: I1101 04:17:45.054441 1824 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 04:17:45.055238 kubelet[1824]: I1101 04:17:45.055214 1824 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 04:17:45.055617 kubelet[1824]: W1101 04:17:45.055542 1824 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.102.154:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.102.154:6443: connect: connection refused Nov 1 04:17:45.055715 kubelet[1824]: E1101 04:17:45.055621 1824 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.102.154:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.102.154:6443: connect: connection refused" logger="UnhandledError" Nov 1 04:17:45.056286 kubelet[1824]: W1101 04:17:45.056270 1824 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 1 04:17:45.058970 kubelet[1824]: I1101 04:17:45.058947 1824 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 04:17:45.059142 kubelet[1824]: I1101 04:17:45.059128 1824 server.go:1287] "Started kubelet" Nov 1 04:17:45.066110 kubelet[1824]: I1101 04:17:45.064938 1824 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 04:17:45.068429 kubelet[1824]: I1101 04:17:45.067314 1824 server.go:479] "Adding debug handlers to kubelet server" Nov 1 04:17:45.078665 kubelet[1824]: E1101 04:17:45.077081 1824 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.102.154:6443/api/v1/namespaces/default/events\": dial tcp 10.244.102.154:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-i9e8z.gb1.brightbox.com.1873c6fc0b819f17 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-i9e8z.gb1.brightbox.com,UID:srv-i9e8z.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-i9e8z.gb1.brightbox.com,},FirstTimestamp:2025-11-01 04:17:45.059098391 +0000 UTC m=+0.494551783,LastTimestamp:2025-11-01 04:17:45.059098391 +0000 UTC m=+0.494551783,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-i9e8z.gb1.brightbox.com,}" Nov 1 04:17:45.079003 kubelet[1824]: I1101 04:17:45.078669 1824 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 04:17:45.079129 kubelet[1824]: I1101 04:17:45.079037 1824 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 04:17:45.082000 audit[1824]: AVC avc: denied { mac_admin } for pid=1824 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:17:45.085218 kubelet[1824]: I1101 04:17:45.085191 1824 kubelet.go:1507] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr(label=system_u:object_r:container_file_t:s0) 
/var/lib/kubelet/plugins_registry: invalid argument" Nov 1 04:17:45.085368 kubelet[1824]: I1101 04:17:45.085352 1824 kubelet.go:1511] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Nov 1 04:17:45.085568 kubelet[1824]: I1101 04:17:45.085554 1824 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 04:17:45.088345 kernel: audit: type=1400 audit(1761970665.082:215): avc: denied { mac_admin } for pid=1824 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:17:45.088432 kernel: audit: type=1401 audit(1761970665.082:215): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 04:17:45.082000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 04:17:45.082000 audit[1824]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c4f020 a1=c000a697d0 a2=c000c4eff0 a3=25 items=0 ppid=1 pid=1824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:45.094005 kernel: audit: type=1300 audit(1761970665.082:215): arch=c000003e syscall=188 success=no exit=-22 a0=c000c4f020 a1=c000a697d0 a2=c000c4eff0 a3=25 items=0 ppid=1 pid=1824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:45.094083 kernel: audit: type=1327 audit(1761970665.082:215): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 04:17:45.082000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 04:17:45.094488 kubelet[1824]: I1101 04:17:45.094468 1824 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 04:17:45.096747 kubelet[1824]: I1101 04:17:45.096732 1824 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 04:17:45.097117 kubelet[1824]: E1101 04:17:45.097096 1824 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-i9e8z.gb1.brightbox.com\" not found" Nov 1 04:17:45.084000 audit[1824]: AVC avc: denied { mac_admin } for pid=1824 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:17:45.100255 kubelet[1824]: I1101 04:17:45.100241 1824 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 04:17:45.100418 kubelet[1824]: I1101 04:17:45.100407 1824 reconciler.go:26] "Reconciler: start to sync state" Nov 1 04:17:45.102398 kernel: audit: type=1400 audit(1761970665.084:216): avc: denied { mac_admin } for pid=1824 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:17:45.102472 kubelet[1824]: W1101 04:17:45.102347 1824 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.102.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.102.154:6443: connect: connection refused Nov 1 04:17:45.102472 kubelet[1824]: E1101 04:17:45.102398 1824 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.102.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.102.154:6443: connect: connection refused" logger="UnhandledError" Nov 1 04:17:45.102579 kubelet[1824]: E1101 04:17:45.102561 1824 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 04:17:45.102703 kubelet[1824]: E1101 04:17:45.102675 1824 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.102.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-i9e8z.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.102.154:6443: connect: connection refused" interval="200ms" Nov 1 04:17:45.102849 kubelet[1824]: I1101 04:17:45.102835 1824 factory.go:221] Registration of the systemd container factory successfully Nov 1 04:17:45.102939 kubelet[1824]: I1101 04:17:45.102924 1824 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 04:17:45.084000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 04:17:45.106035 kubelet[1824]: I1101 04:17:45.105811 1824 factory.go:221] Registration of the containerd container factory successfully Nov 1 04:17:45.110271 kernel: audit: type=1401 audit(1761970665.084:216): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 04:17:45.110334 kernel: audit: type=1300 audit(1761970665.084:216): arch=c000003e syscall=188 success=no exit=-22 a0=c000cae080 a1=c000a697e8 a2=c000c4f0b0 a3=25 items=0 ppid=1 pid=1824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:45.084000 audit[1824]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000cae080 a1=c000a697e8 a2=c000c4f0b0 a3=25 items=0 ppid=1 pid=1824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:45.084000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 04:17:45.114325 kernel: audit: type=1327 audit(1761970665.084:216): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 04:17:45.090000 audit[1835]: NETFILTER_CFG table=mangle:26 family=2 
entries=2 op=nft_register_chain pid=1835 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:45.117338 kernel: audit: type=1325 audit(1761970665.090:217): table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1835 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:45.090000 audit[1835]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcba977b10 a2=0 a3=7ffcba977afc items=0 ppid=1824 pid=1835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:45.090000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Nov 1 04:17:45.091000 audit[1836]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1836 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:45.091000 audit[1836]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff3423bf70 a2=0 a3=7fff3423bf5c items=0 ppid=1824 pid=1836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:45.091000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Nov 1 04:17:45.097000 audit[1838]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1838 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:45.122365 kernel: audit: type=1300 audit(1761970665.090:217): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcba977b10 a2=0 a3=7ffcba977afc items=0 ppid=1824 pid=1835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:45.097000 audit[1838]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc4bccea70 a2=0 a3=7ffc4bccea5c items=0 ppid=1824 pid=1838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:45.097000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 04:17:45.101000 audit[1840]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1840 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:45.101000 audit[1840]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe0a4b5c80 a2=0 a3=7ffe0a4b5c6c items=0 ppid=1824 pid=1840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:45.101000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 04:17:45.147619 kubelet[1824]: I1101 04:17:45.147563 1824 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 04:17:45.147619 kubelet[1824]: I1101 04:17:45.147588 1824 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 04:17:45.147619 kubelet[1824]: I1101 04:17:45.147618 1824 state_mem.go:36] "Initialized 
new in-memory state store" Nov 1 04:17:45.148904 kubelet[1824]: I1101 04:17:45.148873 1824 policy_none.go:49] "None policy: Start" Nov 1 04:17:45.148904 kubelet[1824]: I1101 04:17:45.148901 1824 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 04:17:45.149196 kubelet[1824]: I1101 04:17:45.148920 1824 state_mem.go:35] "Initializing new in-memory state store" Nov 1 04:17:45.150000 audit[1849]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1849 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:45.159447 kubelet[1824]: I1101 04:17:45.159314 1824 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 04:17:45.158000 audit[1824]: AVC avc: denied { mac_admin } for pid=1824 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:17:45.158000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 04:17:45.158000 audit[1824]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000f2de60 a1=c000f3c150 a2=c000f2de30 a3=25 items=0 ppid=1 pid=1824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:45.158000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 04:17:45.159888 kubelet[1824]: I1101 04:17:45.159869 1824 server.go:94] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" Nov 1 04:17:45.160091 kubelet[1824]: I1101 04:17:45.160067 1824 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 04:17:45.160223 kubelet[1824]: I1101 04:17:45.160178 1824 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 04:17:45.160711 kubelet[1824]: I1101 04:17:45.160695 1824 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 04:17:45.161341 kubelet[1824]: E1101 04:17:45.161310 1824 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 04:17:45.161412 kubelet[1824]: E1101 04:17:45.161376 1824 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-i9e8z.gb1.brightbox.com\" not found" Nov 1 04:17:45.150000 audit[1849]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc2dd8e250 a2=0 a3=7ffc2dd8e23c items=0 ppid=1824 pid=1849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:45.150000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Nov 1 04:17:45.161930 kubelet[1824]: I1101 04:17:45.161905 1824 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 04:17:45.161000 audit[1851]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1851 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 04:17:45.161000 audit[1851]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd351e9f50 a2=0 a3=7ffd351e9f3c items=0 ppid=1824 pid=1851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:45.161000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Nov 1 04:17:45.163409 kubelet[1824]: I1101 04:17:45.163393 1824 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 04:17:45.163530 kubelet[1824]: I1101 04:17:45.163516 1824 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 04:17:45.163653 kubelet[1824]: I1101 04:17:45.163640 1824 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 1 04:17:45.163723 kubelet[1824]: I1101 04:17:45.163714 1824 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 04:17:45.163850 kubelet[1824]: E1101 04:17:45.163836 1824 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Nov 1 04:17:45.162000 audit[1852]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1852 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:45.162000 audit[1852]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeadd8b0e0 a2=0 a3=7ffeadd8b0cc items=0 ppid=1824 pid=1852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:45.162000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Nov 1 04:17:45.163000 audit[1853]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=1853 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 04:17:45.163000 audit[1853]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff2063faf0 a2=0 a3=7fff2063fadc items=0 ppid=1824 pid=1853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:45.163000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Nov 1 04:17:45.164000 audit[1854]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1854 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:45.164000 audit[1854]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff4e6a28d0 a2=0 a3=7fff4e6a28bc items=0 ppid=1824 pid=1854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:45.164000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Nov 1 04:17:45.164000 audit[1855]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=1855 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 04:17:45.164000 audit[1855]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7fff4930e5a0 a2=0 a3=10e3 items=0 ppid=1824 pid=1855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:45.164000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Nov 1 04:17:45.166000 audit[1856]: NETFILTER_CFG table=filter:36 family=10 entries=2 op=nft_register_chain pid=1856 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 04:17:45.166000 audit[1856]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffca33a0580 a2=0 a3=7ffca33a056c items=0 ppid=1824 pid=1856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:45.166000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Nov 1 04:17:45.167000 audit[1857]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_chain pid=1857 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:45.167000 audit[1857]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffededc130 a2=0 a3=7fffededc11c items=0 ppid=1824 pid=1857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:45.167000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Nov 1 04:17:45.169429 kubelet[1824]: W1101 04:17:45.169412 1824 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.102.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.102.154:6443: connect: connection refused Nov 1 04:17:45.169570 kubelet[1824]: E1101 04:17:45.169529 1824 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.102.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.102.154:6443: connect: connection refused" logger="UnhandledError" Nov 1 04:17:45.265944 kubelet[1824]: I1101 04:17:45.264981 1824 kubelet_node_status.go:75] "Attempting to register node" node="srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:45.268824 kubelet[1824]: E1101 04:17:45.268768 1824 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.102.154:6443/api/v1/nodes\": dial tcp 10.244.102.154:6443: connect: connection refused" node="srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:45.278234 kubelet[1824]: E1101 04:17:45.278208 1824 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-i9e8z.gb1.brightbox.com\" not found" node="srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:45.283111 kubelet[1824]: E1101 04:17:45.283078 1824 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-i9e8z.gb1.brightbox.com\" not found" node="srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:45.284540 kubelet[1824]: E1101 04:17:45.284519 1824 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-i9e8z.gb1.brightbox.com\" not found" node="srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:45.303850 kubelet[1824]: E1101 04:17:45.303813 1824 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.102.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-i9e8z.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.102.154:6443: connect: connection refused" interval="400ms" Nov 1 04:17:45.401990 kubelet[1824]: I1101 04:17:45.401917 1824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1e855836868a1b09c1ca3f838739f4c5-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-i9e8z.gb1.brightbox.com\" (UID: \"1e855836868a1b09c1ca3f838739f4c5\") " pod="kube-system/kube-controller-manager-srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:45.402241 
kubelet[1824]: I1101 04:17:45.402014 1824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ae44e84903fd4ad1be4aa9a8ec9d7734-kubeconfig\") pod \"kube-scheduler-srv-i9e8z.gb1.brightbox.com\" (UID: \"ae44e84903fd4ad1be4aa9a8ec9d7734\") " pod="kube-system/kube-scheduler-srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:45.402241 kubelet[1824]: I1101 04:17:45.402066 1824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/67328da50b364fe3e739ac52a4c79e6e-ca-certs\") pod \"kube-apiserver-srv-i9e8z.gb1.brightbox.com\" (UID: \"67328da50b364fe3e739ac52a4c79e6e\") " pod="kube-system/kube-apiserver-srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:45.402241 kubelet[1824]: I1101 04:17:45.402113 1824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1e855836868a1b09c1ca3f838739f4c5-flexvolume-dir\") pod \"kube-controller-manager-srv-i9e8z.gb1.brightbox.com\" (UID: \"1e855836868a1b09c1ca3f838739f4c5\") " pod="kube-system/kube-controller-manager-srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:45.402241 kubelet[1824]: I1101 04:17:45.402154 1824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1e855836868a1b09c1ca3f838739f4c5-k8s-certs\") pod \"kube-controller-manager-srv-i9e8z.gb1.brightbox.com\" (UID: \"1e855836868a1b09c1ca3f838739f4c5\") " pod="kube-system/kube-controller-manager-srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:45.402241 kubelet[1824]: I1101 04:17:45.402193 1824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/67328da50b364fe3e739ac52a4c79e6e-k8s-certs\") pod \"kube-apiserver-srv-i9e8z.gb1.brightbox.com\" (UID: \"67328da50b364fe3e739ac52a4c79e6e\") " pod="kube-system/kube-apiserver-srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:45.402530 kubelet[1824]: I1101 04:17:45.402238 1824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/67328da50b364fe3e739ac52a4c79e6e-usr-share-ca-certificates\") pod \"kube-apiserver-srv-i9e8z.gb1.brightbox.com\" (UID: \"67328da50b364fe3e739ac52a4c79e6e\") " pod="kube-system/kube-apiserver-srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:45.402530 kubelet[1824]: I1101 04:17:45.402278 1824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1e855836868a1b09c1ca3f838739f4c5-ca-certs\") pod \"kube-controller-manager-srv-i9e8z.gb1.brightbox.com\" (UID: \"1e855836868a1b09c1ca3f838739f4c5\") " pod="kube-system/kube-controller-manager-srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:45.402530 kubelet[1824]: I1101 04:17:45.402365 1824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1e855836868a1b09c1ca3f838739f4c5-kubeconfig\") pod \"kube-controller-manager-srv-i9e8z.gb1.brightbox.com\" (UID: \"1e855836868a1b09c1ca3f838739f4c5\") " pod="kube-system/kube-controller-manager-srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:45.473257 kubelet[1824]: I1101 04:17:45.473209 1824 kubelet_node_status.go:75] "Attempting to register node" 
node="srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:45.474183 kubelet[1824]: E1101 04:17:45.474131 1824 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.102.154:6443/api/v1/nodes\": dial tcp 10.244.102.154:6443: connect: connection refused" node="srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:45.577087 kubelet[1824]: E1101 04:17:45.576655 1824 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.102.154:6443/api/v1/namespaces/default/events\": dial tcp 10.244.102.154:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-i9e8z.gb1.brightbox.com.1873c6fc0b819f17 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-i9e8z.gb1.brightbox.com,UID:srv-i9e8z.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-i9e8z.gb1.brightbox.com,},FirstTimestamp:2025-11-01 04:17:45.059098391 +0000 UTC m=+0.494551783,LastTimestamp:2025-11-01 04:17:45.059098391 +0000 UTC m=+0.494551783,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-i9e8z.gb1.brightbox.com,}" Nov 1 04:17:45.581706 env[1306]: time="2025-11-01T04:17:45.581551560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-i9e8z.gb1.brightbox.com,Uid:67328da50b364fe3e739ac52a4c79e6e,Namespace:kube-system,Attempt:0,}" Nov 1 04:17:45.586736 env[1306]: time="2025-11-01T04:17:45.586375269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-i9e8z.gb1.brightbox.com,Uid:1e855836868a1b09c1ca3f838739f4c5,Namespace:kube-system,Attempt:0,}" Nov 1 04:17:45.586736 env[1306]: time="2025-11-01T04:17:45.586465314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-i9e8z.gb1.brightbox.com,Uid:ae44e84903fd4ad1be4aa9a8ec9d7734,Namespace:kube-system,Attempt:0,}" Nov 1 04:17:45.705077 kubelet[1824]: E1101 04:17:45.704837 1824 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.102.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-i9e8z.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.102.154:6443: connect: connection refused" interval="800ms" Nov 1 04:17:45.879618 kubelet[1824]: I1101 04:17:45.879401 1824 kubelet_node_status.go:75] "Attempting to register node" node="srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:45.880888 kubelet[1824]: E1101 04:17:45.880772 1824 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.102.154:6443/api/v1/nodes\": dial tcp 10.244.102.154:6443: connect: connection refused" node="srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:46.209480 kubelet[1824]: W1101 04:17:46.209236 1824 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.102.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.102.154:6443: connect: connection refused Nov 1 04:17:46.209480 kubelet[1824]: E1101 04:17:46.209392 1824 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.102.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.102.154:6443: connect: connection refused" logger="UnhandledError" Nov 1 
04:17:46.359983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3372911767.mount: Deactivated successfully. Nov 1 04:17:46.361127 env[1306]: time="2025-11-01T04:17:46.361070226Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:46.362650 env[1306]: time="2025-11-01T04:17:46.362611398Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:46.363904 env[1306]: time="2025-11-01T04:17:46.363867362Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:46.365106 env[1306]: time="2025-11-01T04:17:46.365073482Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:46.368986 env[1306]: time="2025-11-01T04:17:46.368960297Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:46.371790 env[1306]: time="2025-11-01T04:17:46.371766626Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:46.398882 env[1306]: time="2025-11-01T04:17:46.398843911Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:46.401935 env[1306]: time="2025-11-01T04:17:46.401901262Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:46.403031 env[1306]: time="2025-11-01T04:17:46.401876333Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 04:17:46.403031 env[1306]: time="2025-11-01T04:17:46.401946062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 04:17:46.403031 env[1306]: time="2025-11-01T04:17:46.401958774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 04:17:46.403283 env[1306]: time="2025-11-01T04:17:46.402132683Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/47de84f1d1c487d4f774e2c830af67b0fb353952406db1fe3383992e5b781d30 pid=1870 runtime=io.containerd.runc.v2 Nov 1 04:17:46.403647 env[1306]: time="2025-11-01T04:17:46.403626277Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:46.404446 env[1306]: time="2025-11-01T04:17:46.404415273Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:46.405040 env[1306]: time="2025-11-01T04:17:46.405005815Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:46.409160 env[1306]: time="2025-11-01T04:17:46.409136715Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:46.412933 env[1306]: time="2025-11-01T04:17:46.412865092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 04:17:46.413025 env[1306]: time="2025-11-01T04:17:46.412941347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 04:17:46.413025 env[1306]: time="2025-11-01T04:17:46.412976430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 04:17:46.413184 env[1306]: time="2025-11-01T04:17:46.413128235Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa56da0a8185a889cab529dfc3789dac74f1091e144d4ada65cab0cdc51b2da4 pid=1881 runtime=io.containerd.runc.v2 Nov 1 04:17:46.448369 env[1306]: time="2025-11-01T04:17:46.447858871Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 04:17:46.448369 env[1306]: time="2025-11-01T04:17:46.447938575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 04:17:46.448369 env[1306]: time="2025-11-01T04:17:46.447962143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 04:17:46.449241 env[1306]: time="2025-11-01T04:17:46.448756353Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/666c65083f8d4f254751b293e04e704a4bc0d661d821963b9a24f0565bffb2f1 pid=1915 runtime=io.containerd.runc.v2 Nov 1 04:17:46.468950 kubelet[1824]: W1101 04:17:46.468832 1824 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.102.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.102.154:6443: connect: connection refused Nov 1 04:17:46.468950 kubelet[1824]: E1101 04:17:46.468885 1824 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.102.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.102.154:6443: connect: connection refused" logger="UnhandledError" Nov 1 04:17:46.490807 kubelet[1824]: W1101 04:17:46.490753 1824 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.102.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-i9e8z.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.102.154:6443: connect: connection refused Nov 1 04:17:46.490962 kubelet[1824]: E1101 04:17:46.490813 1824 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.102.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-i9e8z.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.102.154:6443: connect: connection refused" logger="UnhandledError" Nov 1 04:17:46.511346 kubelet[1824]: E1101 04:17:46.510706 1824 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.102.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-i9e8z.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.102.154:6443: connect: connection refused" interval="1.6s" Nov 1 04:17:46.517332 env[1306]: time="2025-11-01T04:17:46.517279400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-i9e8z.gb1.brightbox.com,Uid:1e855836868a1b09c1ca3f838739f4c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"47de84f1d1c487d4f774e2c830af67b0fb353952406db1fe3383992e5b781d30\"" Nov 1 04:17:46.525557 env[1306]: time="2025-11-01T04:17:46.525521364Z" level=info msg="CreateContainer within sandbox \"47de84f1d1c487d4f774e2c830af67b0fb353952406db1fe3383992e5b781d30\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 04:17:46.535221 env[1306]: time="2025-11-01T04:17:46.535177053Z" level=info msg="CreateContainer within sandbox \"47de84f1d1c487d4f774e2c830af67b0fb353952406db1fe3383992e5b781d30\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b8425aea193f3a43aa0733b50b673340fd5fc13c6e0fdb09923003cc236e1a55\"" Nov 1 04:17:46.535983 env[1306]: time="2025-11-01T04:17:46.535952846Z" level=info msg="StartContainer for \"b8425aea193f3a43aa0733b50b673340fd5fc13c6e0fdb09923003cc236e1a55\"" Nov 1 04:17:46.542021 env[1306]: time="2025-11-01T04:17:46.541994798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-i9e8z.gb1.brightbox.com,Uid:ae44e84903fd4ad1be4aa9a8ec9d7734,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"666c65083f8d4f254751b293e04e704a4bc0d661d821963b9a24f0565bffb2f1\"" Nov 1 04:17:46.543961 env[1306]: time="2025-11-01T04:17:46.543935503Z" level=info msg="CreateContainer within sandbox \"666c65083f8d4f254751b293e04e704a4bc0d661d821963b9a24f0565bffb2f1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 04:17:46.552935 env[1306]: time="2025-11-01T04:17:46.552896060Z" level=info msg="CreateContainer within sandbox \"666c65083f8d4f254751b293e04e704a4bc0d661d821963b9a24f0565bffb2f1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e034641a8135ebfb83294dc77de73c2ec09aea89a10e8bfb416f554b99d81746\"" Nov 1 04:17:46.555563 env[1306]: time="2025-11-01T04:17:46.555504078Z" level=info msg="StartContainer for \"e034641a8135ebfb83294dc77de73c2ec09aea89a10e8bfb416f554b99d81746\"" Nov 1 04:17:46.559102 env[1306]: time="2025-11-01T04:17:46.559056416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-i9e8z.gb1.brightbox.com,Uid:67328da50b364fe3e739ac52a4c79e6e,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa56da0a8185a889cab529dfc3789dac74f1091e144d4ada65cab0cdc51b2da4\"" Nov 1 04:17:46.561685 env[1306]: time="2025-11-01T04:17:46.561651604Z" level=info msg="CreateContainer within sandbox \"fa56da0a8185a889cab529dfc3789dac74f1091e144d4ada65cab0cdc51b2da4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 04:17:46.571533 env[1306]: time="2025-11-01T04:17:46.571490476Z" level=info msg="CreateContainer within sandbox \"fa56da0a8185a889cab529dfc3789dac74f1091e144d4ada65cab0cdc51b2da4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d4b423172d5340bd1b2accce0f8320829814e1cf6f3a358fb29433891dca2d03\"" Nov 1 04:17:46.582841 env[1306]: time="2025-11-01T04:17:46.582803196Z" level=info msg="StartContainer for \"d4b423172d5340bd1b2accce0f8320829814e1cf6f3a358fb29433891dca2d03\"" Nov 1 04:17:46.586245 kubelet[1824]: W1101 04:17:46.586191 1824 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.102.154:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.102.154:6443: connect: connection refused Nov 1 04:17:46.586505 kubelet[1824]: E1101 04:17:46.586486 1824 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.102.154:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.102.154:6443: connect: connection refused" logger="UnhandledError" Nov 1 04:17:46.657983 env[1306]: time="2025-11-01T04:17:46.657940013Z" level=info msg="StartContainer for \"b8425aea193f3a43aa0733b50b673340fd5fc13c6e0fdb09923003cc236e1a55\" returns successfully" Nov 1 04:17:46.683401 kubelet[1824]: I1101 04:17:46.683006 1824 kubelet_node_status.go:75] "Attempting to register node" node="srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:46.683401 kubelet[1824]: E1101 04:17:46.683366 1824 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.102.154:6443/api/v1/nodes\": dial tcp 10.244.102.154:6443: connect: connection refused" node="srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:46.736116 env[1306]: time="2025-11-01T04:17:46.735871172Z" level=info msg="StartContainer for \"e034641a8135ebfb83294dc77de73c2ec09aea89a10e8bfb416f554b99d81746\" returns successfully" Nov 1 04:17:46.737188 env[1306]: 
time="2025-11-01T04:17:46.737064832Z" level=info msg="StartContainer for \"d4b423172d5340bd1b2accce0f8320829814e1cf6f3a358fb29433891dca2d03\" returns successfully" Nov 1 04:17:47.068497 kubelet[1824]: E1101 04:17:47.068374 1824 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.244.102.154:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.102.154:6443: connect: connection refused" logger="UnhandledError" Nov 1 04:17:47.177446 kubelet[1824]: E1101 04:17:47.177421 1824 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-i9e8z.gb1.brightbox.com\" not found" node="srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:47.181627 kubelet[1824]: E1101 04:17:47.181605 1824 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-i9e8z.gb1.brightbox.com\" not found" node="srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:47.184116 kubelet[1824]: E1101 04:17:47.184096 1824 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-i9e8z.gb1.brightbox.com\" not found" node="srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:48.187347 kubelet[1824]: E1101 04:17:48.187300 1824 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-i9e8z.gb1.brightbox.com\" not found" node="srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:48.188528 kubelet[1824]: E1101 04:17:48.188505 1824 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-i9e8z.gb1.brightbox.com\" not found" node="srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:48.287883 kubelet[1824]: I1101 04:17:48.287837 1824 kubelet_node_status.go:75] "Attempting to register node" node="srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:48.869353 kubelet[1824]: E1101 04:17:48.869306 1824 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-i9e8z.gb1.brightbox.com\" not found" node="srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:49.052854 kubelet[1824]: I1101 04:17:49.052817 1824 kubelet_node_status.go:78] "Successfully registered node" node="srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:49.053085 kubelet[1824]: E1101 04:17:49.053069 1824 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-i9e8z.gb1.brightbox.com\": node \"srv-i9e8z.gb1.brightbox.com\" not found" Nov 1 04:17:49.072667 kubelet[1824]: E1101 04:17:49.072638 1824 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-i9e8z.gb1.brightbox.com\" not found" Nov 1 04:17:49.173543 kubelet[1824]: E1101 04:17:49.173458 1824 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-i9e8z.gb1.brightbox.com\" not found" Nov 1 04:17:49.191876 kubelet[1824]: E1101 04:17:49.191812 1824 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-i9e8z.gb1.brightbox.com\" not found" node="srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:49.274486 kubelet[1824]: E1101 04:17:49.274414 1824 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-i9e8z.gb1.brightbox.com\" not found" Nov 1 04:17:49.374690 kubelet[1824]: E1101 04:17:49.374615 1824 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-i9e8z.gb1.brightbox.com\" not found" Nov 1 04:17:49.474991 kubelet[1824]: E1101 04:17:49.474777 1824 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-i9e8z.gb1.brightbox.com\" not found" Nov 1 04:17:49.576040 kubelet[1824]: E1101 04:17:49.575953 1824 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-i9e8z.gb1.brightbox.com\" not found" Nov 1 04:17:49.698740 kubelet[1824]: I1101 04:17:49.698630 1824 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:49.712172 kubelet[1824]: E1101 04:17:49.712040 1824 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-i9e8z.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:49.712172 kubelet[1824]: I1101 04:17:49.712108 1824 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:49.717979 kubelet[1824]: E1101 04:17:49.717881 1824 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-i9e8z.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:49.717979 kubelet[1824]: I1101 04:17:49.717956 1824 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:49.723264 kubelet[1824]: E1101 04:17:49.723213 1824 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-i9e8z.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:50.047825 kubelet[1824]: I1101 04:17:50.047725 1824 apiserver.go:52] "Watching apiserver" Nov 1 04:17:50.101016 kubelet[1824]: I1101 04:17:50.100793 1824 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 04:17:51.054344 systemd[1]: Reloading. Nov 1 04:17:51.149953 /usr/lib/systemd/system-generators/torcx-generator[2116]: time="2025-11-01T04:17:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 04:17:51.150440 /usr/lib/systemd/system-generators/torcx-generator[2116]: time="2025-11-01T04:17:51Z" level=info msg="torcx already run" Nov 1 04:17:51.261673 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 04:17:51.261696 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 04:17:51.285526 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Nov 1 04:17:51.416208 kubelet[1824]: I1101 04:17:51.416054 1824 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 04:17:51.417677 systemd[1]: Stopping kubelet.service... Nov 1 04:17:51.440317 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 04:17:51.441089 systemd[1]: Stopped kubelet.service. Nov 1 04:17:51.454833 kernel: kauditd_printk_skb: 38 callbacks suppressed Nov 1 04:17:51.455295 kernel: audit: type=1131 audit(1761970671.440:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:17:51.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:17:51.461667 systemd[1]: Starting kubelet.service... Nov 1 04:17:52.520350 kernel: audit: type=1130 audit(1761970672.510:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:17:52.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:17:52.510660 systemd[1]: Started kubelet.service. Nov 1 04:17:52.615005 kubelet[2178]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 04:17:52.615005 kubelet[2178]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 04:17:52.616121 kubelet[2178]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 04:17:52.617199 kubelet[2178]: I1101 04:17:52.617135 2178 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 04:17:52.632945 kubelet[2178]: I1101 04:17:52.632907 2178 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 04:17:52.633114 kubelet[2178]: I1101 04:17:52.633101 2178 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 04:17:52.633553 kubelet[2178]: I1101 04:17:52.633535 2178 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 04:17:52.637101 kubelet[2178]: I1101 04:17:52.637078 2178 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Nov 1 04:17:52.656685 kubelet[2178]: I1101 04:17:52.656654 2178 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 04:17:52.674884 kubelet[2178]: E1101 04:17:52.674840 2178 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 04:17:52.674884 kubelet[2178]: I1101 04:17:52.674892 2178 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 04:17:52.680244 kubelet[2178]: I1101 04:17:52.680224 2178 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 1 04:17:52.680970 kubelet[2178]: I1101 04:17:52.680926 2178 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 04:17:52.681296 kubelet[2178]: I1101 04:17:52.681101 2178 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-i9e8z.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 1 04:17:52.681509 kubelet[2178]: I1101 04:17:52.681495 2178 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 04:17:52.681596 kubelet[2178]: I1101 04:17:52.681587 2178 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 04:17:52.681708 kubelet[2178]: I1101 04:17:52.681699 2178 state_mem.go:36] "Initialized new in-memory state store" Nov 1 04:17:52.681945 kubelet[2178]: I1101 04:17:52.681935 2178 kubelet.go:446] "Attempting to sync node with API server" Nov 1 04:17:52.682420 kubelet[2178]: I1101 04:17:52.682366 2178 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 04:17:52.682529 kubelet[2178]: I1101 04:17:52.682519 2178 kubelet.go:352] "Adding apiserver pod source" Nov 1 04:17:52.682964 kubelet[2178]: I1101 04:17:52.682932 2178 apiserver.go:42] "Waiting for node sync before 
watching apiserver pods" Nov 1 04:17:52.708765 kubelet[2178]: I1101 04:17:52.706839 2178 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 04:17:52.708765 kubelet[2178]: I1101 04:17:52.707341 2178 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 04:17:52.708765 kubelet[2178]: I1101 04:17:52.707795 2178 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 04:17:52.708765 kubelet[2178]: I1101 04:17:52.707829 2178 server.go:1287] "Started kubelet" Nov 1 04:17:52.711440 kubelet[2178]: I1101 04:17:52.711373 2178 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 04:17:52.713643 kubelet[2178]: I1101 04:17:52.713626 2178 server.go:479] "Adding debug handlers to kubelet server" Nov 1 04:17:52.721039 kernel: audit: type=1400 audit(1761970672.716:232): avc: denied { mac_admin } for pid=2178 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:17:52.721153 kernel: audit: type=1401 audit(1761970672.716:232): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 04:17:52.716000 audit[2178]: AVC avc: denied { mac_admin } for pid=2178 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:17:52.716000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 04:17:52.727208 kernel: audit: type=1300 audit(1761970672.716:232): arch=c000003e syscall=188 success=no exit=-22 a0=c000bd2540 a1=c0009011a0 a2=c000bd2510 a3=25 items=0 ppid=1 pid=2178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:52.716000 audit[2178]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000bd2540 a1=c0009011a0 a2=c000bd2510 a3=25 items=0 ppid=1 pid=2178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:52.727532 kubelet[2178]: I1101 04:17:52.723246 2178 kubelet.go:1507] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins_registry: invalid argument" Nov 1 04:17:52.727532 kubelet[2178]: I1101 04:17:52.723334 2178 kubelet.go:1511] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Nov 1 04:17:52.727532 kubelet[2178]: I1101 04:17:52.723363 2178 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 04:17:52.733286 kernel: audit: type=1327 audit(1761970672.716:232): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 04:17:52.716000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 04:17:52.733475 kubelet[2178]: I1101 04:17:52.728541 2178 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 04:17:52.733475 kubelet[2178]: I1101 04:17:52.728764 2178 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 04:17:52.733835 kubelet[2178]: I1101 04:17:52.733808 2178 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 04:17:52.735979 kubelet[2178]: I1101 04:17:52.735446 2178 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 04:17:52.735979 kubelet[2178]: I1101 04:17:52.735593 2178 reconciler.go:26] "Reconciler: start to sync state" Nov 1 04:17:52.738273 kubelet[2178]: I1101 04:17:52.738256 2178 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 04:17:52.747718 kernel: audit: type=1400 audit(1761970672.721:233): avc: denied { mac_admin } for pid=2178 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:17:52.721000 audit[2178]: AVC avc: denied { mac_admin } for pid=2178 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:17:52.747908 kubelet[2178]: I1101 04:17:52.744329 2178 factory.go:221] Registration of the systemd container factory successfully Nov 1 04:17:52.747908 kubelet[2178]: I1101 04:17:52.744422 2178 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 04:17:52.721000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 04:17:52.721000 audit[2178]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000af7180 a1=c000901560 a2=c000bd29c0 a3=25 items=0 ppid=1 pid=2178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:52.757987 kernel: audit: type=1401 audit(1761970672.721:233): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 04:17:52.758108 kernel: audit: type=1300 audit(1761970672.721:233): arch=c000003e syscall=188 success=no exit=-22 a0=c000af7180 a1=c000901560 a2=c000bd29c0 a3=25 items=0 ppid=1 pid=2178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:52.758974 kubelet[2178]: E1101 04:17:52.757118 2178 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 04:17:52.721000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 04:17:52.764839 kernel: audit: type=1327 audit(1761970672.721:233): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 04:17:52.773391 kubelet[2178]: I1101 04:17:52.772262 2178 factory.go:221] Registration of the containerd container factory successfully Nov 1 04:17:52.824542 kubelet[2178]: I1101 04:17:52.823759 2178 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 04:17:52.836006 kubelet[2178]: I1101 04:17:52.835949 2178 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 04:17:52.836006 kubelet[2178]: I1101 04:17:52.835983 2178 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 04:17:52.836006 kubelet[2178]: I1101 04:17:52.836002 2178 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 04:17:52.836006 kubelet[2178]: I1101 04:17:52.836018 2178 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 04:17:52.836263 kubelet[2178]: E1101 04:17:52.836062 2178 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 04:17:52.873293 kubelet[2178]: I1101 04:17:52.872653 2178 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 04:17:52.873293 kubelet[2178]: I1101 04:17:52.872671 2178 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 04:17:52.873293 kubelet[2178]: I1101 04:17:52.872690 2178 state_mem.go:36] "Initialized new in-memory state store" Nov 1 04:17:52.873293 kubelet[2178]: I1101 04:17:52.872875 2178 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 04:17:52.873293 kubelet[2178]: I1101 04:17:52.872885 2178 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 04:17:52.873293 kubelet[2178]: I1101 04:17:52.872903 2178 policy_none.go:49] "None policy: Start" Nov 1 04:17:52.873293 kubelet[2178]: I1101 04:17:52.872914 2178 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 04:17:52.873293 kubelet[2178]: I1101 04:17:52.872929 2178 state_mem.go:35] "Initializing new in-memory state store" Nov 1 04:17:52.873293 kubelet[2178]: I1101 04:17:52.873054 2178 state_mem.go:75] "Updated machine memory state" Nov 1 04:17:52.874000 audit[2178]: AVC avc: denied { mac_admin } for pid=2178 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:17:52.874000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Nov 1 04:17:52.874000 audit[2178]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00127b0b0 a1=c0012474a0 a2=c00127b080 a3=25 items=0 ppid=1 pid=2178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 
key=(null) Nov 1 04:17:52.874000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Nov 1 04:17:52.875107 kubelet[2178]: I1101 04:17:52.874307 2178 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 04:17:52.875107 kubelet[2178]: I1101 04:17:52.874452 2178 server.go:94] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" Nov 1 04:17:52.875107 kubelet[2178]: I1101 04:17:52.874616 2178 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 04:17:52.875107 kubelet[2178]: I1101 04:17:52.874629 2178 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 04:17:52.877135 kubelet[2178]: I1101 04:17:52.876101 2178 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 04:17:52.880705 kubelet[2178]: E1101 04:17:52.880681 2178 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 04:17:52.941263 kubelet[2178]: I1101 04:17:52.941224 2178 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:52.944976 kubelet[2178]: I1101 04:17:52.944947 2178 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:52.950063 kubelet[2178]: W1101 04:17:52.950039 2178 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 04:17:52.950491 kubelet[2178]: W1101 04:17:52.950474 2178 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 04:17:52.951769 kubelet[2178]: I1101 04:17:52.951437 2178 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:52.961299 kubelet[2178]: W1101 04:17:52.961275 2178 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 04:17:53.010537 kubelet[2178]: I1101 04:17:53.010468 2178 kubelet_node_status.go:75] "Attempting to register node" node="srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:53.034173 kubelet[2178]: I1101 04:17:53.034072 2178 kubelet_node_status.go:124] "Node was previously registered" node="srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:53.034173 kubelet[2178]: I1101 04:17:53.034171 2178 kubelet_node_status.go:78] "Successfully registered node" node="srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:53.041815 kubelet[2178]: I1101 04:17:53.041780 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1e855836868a1b09c1ca3f838739f4c5-k8s-certs\") pod \"kube-controller-manager-srv-i9e8z.gb1.brightbox.com\" (UID: \"1e855836868a1b09c1ca3f838739f4c5\") " 
pod="kube-system/kube-controller-manager-srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:53.041976 kubelet[2178]: I1101 04:17:53.041814 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1e855836868a1b09c1ca3f838739f4c5-kubeconfig\") pod \"kube-controller-manager-srv-i9e8z.gb1.brightbox.com\" (UID: \"1e855836868a1b09c1ca3f838739f4c5\") " pod="kube-system/kube-controller-manager-srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:53.041976 kubelet[2178]: I1101 04:17:53.041871 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ae44e84903fd4ad1be4aa9a8ec9d7734-kubeconfig\") pod \"kube-scheduler-srv-i9e8z.gb1.brightbox.com\" (UID: \"ae44e84903fd4ad1be4aa9a8ec9d7734\") " pod="kube-system/kube-scheduler-srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:53.041976 kubelet[2178]: I1101 04:17:53.041888 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/67328da50b364fe3e739ac52a4c79e6e-ca-certs\") pod \"kube-apiserver-srv-i9e8z.gb1.brightbox.com\" (UID: \"67328da50b364fe3e739ac52a4c79e6e\") " pod="kube-system/kube-apiserver-srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:53.041976 kubelet[2178]: I1101 04:17:53.041931 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/67328da50b364fe3e739ac52a4c79e6e-k8s-certs\") pod \"kube-apiserver-srv-i9e8z.gb1.brightbox.com\" (UID: \"67328da50b364fe3e739ac52a4c79e6e\") " pod="kube-system/kube-apiserver-srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:53.041976 kubelet[2178]: I1101 04:17:53.041948 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/67328da50b364fe3e739ac52a4c79e6e-usr-share-ca-certificates\") pod \"kube-apiserver-srv-i9e8z.gb1.brightbox.com\" (UID: \"67328da50b364fe3e739ac52a4c79e6e\") " pod="kube-system/kube-apiserver-srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:53.042157 kubelet[2178]: I1101 04:17:53.041986 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1e855836868a1b09c1ca3f838739f4c5-ca-certs\") pod \"kube-controller-manager-srv-i9e8z.gb1.brightbox.com\" (UID: \"1e855836868a1b09c1ca3f838739f4c5\") " pod="kube-system/kube-controller-manager-srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:53.042157 kubelet[2178]: I1101 04:17:53.042004 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1e855836868a1b09c1ca3f838739f4c5-flexvolume-dir\") pod \"kube-controller-manager-srv-i9e8z.gb1.brightbox.com\" (UID: \"1e855836868a1b09c1ca3f838739f4c5\") " pod="kube-system/kube-controller-manager-srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:53.042157 kubelet[2178]: I1101 04:17:53.042022 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1e855836868a1b09c1ca3f838739f4c5-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-i9e8z.gb1.brightbox.com\" (UID: \"1e855836868a1b09c1ca3f838739f4c5\") " pod="kube-system/kube-controller-manager-srv-i9e8z.gb1.brightbox.com" Nov 1 
04:17:53.706773 kubelet[2178]: I1101 04:17:53.706730 2178 apiserver.go:52] "Watching apiserver" Nov 1 04:17:53.735942 kubelet[2178]: I1101 04:17:53.735846 2178 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 04:17:53.835517 kubelet[2178]: I1101 04:17:53.835447 2178 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-i9e8z.gb1.brightbox.com" podStartSLOduration=1.835423793 podStartE2EDuration="1.835423793s" podCreationTimestamp="2025-11-01 04:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 04:17:53.820566201 +0000 UTC m=+1.281468898" watchObservedRunningTime="2025-11-01 04:17:53.835423793 +0000 UTC m=+1.296326509" Nov 1 04:17:53.845440 kubelet[2178]: I1101 04:17:53.845392 2178 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-i9e8z.gb1.brightbox.com" podStartSLOduration=1.84536297 podStartE2EDuration="1.84536297s" podCreationTimestamp="2025-11-01 04:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 04:17:53.835752877 +0000 UTC m=+1.296655598" watchObservedRunningTime="2025-11-01 04:17:53.84536297 +0000 UTC m=+1.306265666" Nov 1 04:17:53.855283 kubelet[2178]: I1101 04:17:53.855233 2178 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-i9e8z.gb1.brightbox.com" podStartSLOduration=1.855207576 podStartE2EDuration="1.855207576s" podCreationTimestamp="2025-11-01 04:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 04:17:53.845760051 +0000 UTC m=+1.306662752" watchObservedRunningTime="2025-11-01 04:17:53.855207576 +0000 UTC m=+1.316110294" Nov 1 04:17:53.869244 kubelet[2178]: I1101 04:17:53.869186 2178 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:53.883789 kubelet[2178]: W1101 04:17:53.883723 2178 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 04:17:53.884020 kubelet[2178]: E1101 04:17:53.883855 2178 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-i9e8z.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-i9e8z.gb1.brightbox.com" Nov 1 04:17:55.361088 kubelet[2178]: I1101 04:17:55.361012 2178 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 04:17:55.362179 env[1306]: time="2025-11-01T04:17:55.361779781Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
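The kubelet entries above (mirror pods for the static control-plane pods, node registration, the VerifyControllerAttachedVolume reconciler lines, and the pod_startup_latency_tracker records) carry their data as klog structured key="value" pairs embedded in ordinary journal text. Below is a minimal sketch of pulling the startup-latency fields back out with a regular expression; the helper name, the file path in the usage comment, and the choice of extracted keys are illustrative assumptions rather than anything defined by the log itself.

```python
import re

# Matches klog-style key="quoted value" or key=bareword pairs inside a journal line.
KV_RE = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

def pod_startup_records(lines):
    """Yield (pod name, startup SLO duration in seconds) from pod_startup_latency_tracker lines."""
    for line in lines:
        if "Observed pod startup duration" not in line:
            continue
        fields = {k: v.strip('"') for k, v in KV_RE.findall(line)}
        if "pod" in fields and "podStartSLOduration" in fields:
            yield fields["pod"], float(fields["podStartSLOduration"])

# Usage (hypothetical path):
# with open("kubelet.log") as f:
#     for pod, slo in pod_startup_records(f):
#         print(f"{pod}: {slo:.3f}s")
```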
Nov 1 04:17:55.363697 kubelet[2178]: I1101 04:17:55.363653 2178 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 04:17:56.364575 kubelet[2178]: I1101 04:17:56.364500 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8247560-755f-46d7-b05b-99ad2c0e1c7d-xtables-lock\") pod \"kube-proxy-bqqxs\" (UID: \"e8247560-755f-46d7-b05b-99ad2c0e1c7d\") " pod="kube-system/kube-proxy-bqqxs" Nov 1 04:17:56.365604 kubelet[2178]: I1101 04:17:56.365556 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e8247560-755f-46d7-b05b-99ad2c0e1c7d-kube-proxy\") pod \"kube-proxy-bqqxs\" (UID: \"e8247560-755f-46d7-b05b-99ad2c0e1c7d\") " pod="kube-system/kube-proxy-bqqxs" Nov 1 04:17:56.365843 kubelet[2178]: I1101 04:17:56.365802 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8247560-755f-46d7-b05b-99ad2c0e1c7d-lib-modules\") pod \"kube-proxy-bqqxs\" (UID: \"e8247560-755f-46d7-b05b-99ad2c0e1c7d\") " pod="kube-system/kube-proxy-bqqxs" Nov 1 04:17:56.366070 kubelet[2178]: I1101 04:17:56.366032 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lwpc\" (UniqueName: \"kubernetes.io/projected/e8247560-755f-46d7-b05b-99ad2c0e1c7d-kube-api-access-6lwpc\") pod \"kube-proxy-bqqxs\" (UID: \"e8247560-755f-46d7-b05b-99ad2c0e1c7d\") " pod="kube-system/kube-proxy-bqqxs" Nov 1 04:17:56.486526 kubelet[2178]: I1101 04:17:56.486480 2178 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 1 04:17:56.567906 kubelet[2178]: I1101 04:17:56.567709 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrfgm\" (UniqueName: \"kubernetes.io/projected/de4509a5-f258-4e5a-834c-cb73b965a12d-kube-api-access-lrfgm\") pod \"tigera-operator-7dcd859c48-nt78v\" (UID: \"de4509a5-f258-4e5a-834c-cb73b965a12d\") " pod="tigera-operator/tigera-operator-7dcd859c48-nt78v" Nov 1 04:17:56.568491 kubelet[2178]: I1101 04:17:56.568447 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/de4509a5-f258-4e5a-834c-cb73b965a12d-var-lib-calico\") pod \"tigera-operator-7dcd859c48-nt78v\" (UID: \"de4509a5-f258-4e5a-834c-cb73b965a12d\") " pod="tigera-operator/tigera-operator-7dcd859c48-nt78v" Nov 1 04:17:56.583521 env[1306]: time="2025-11-01T04:17:56.583316588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bqqxs,Uid:e8247560-755f-46d7-b05b-99ad2c0e1c7d,Namespace:kube-system,Attempt:0,}" Nov 1 04:17:56.605145 env[1306]: time="2025-11-01T04:17:56.605060348Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 04:17:56.605145 env[1306]: time="2025-11-01T04:17:56.605121268Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 04:17:56.605437 env[1306]: time="2025-11-01T04:17:56.605405624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 04:17:56.605659 env[1306]: time="2025-11-01T04:17:56.605627524Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d2fbcdc7ce437df0a1fbba87fb442c9f79021f9d1115ea6c8452363e039296c pid=2233 runtime=io.containerd.runc.v2 Nov 1 04:17:56.662744 env[1306]: time="2025-11-01T04:17:56.662698081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bqqxs,Uid:e8247560-755f-46d7-b05b-99ad2c0e1c7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d2fbcdc7ce437df0a1fbba87fb442c9f79021f9d1115ea6c8452363e039296c\"" Nov 1 04:17:56.667430 env[1306]: time="2025-11-01T04:17:56.667387544Z" level=info msg="CreateContainer within sandbox \"1d2fbcdc7ce437df0a1fbba87fb442c9f79021f9d1115ea6c8452363e039296c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 04:17:56.683396 env[1306]: time="2025-11-01T04:17:56.683352652Z" level=info msg="CreateContainer within sandbox \"1d2fbcdc7ce437df0a1fbba87fb442c9f79021f9d1115ea6c8452363e039296c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"079565a8115af57ab09fe21fd56dd8070913c79c22f10fa4ac361f583c3b3dfb\"" Nov 1 04:17:56.685683 env[1306]: time="2025-11-01T04:17:56.685650356Z" level=info msg="StartContainer for \"079565a8115af57ab09fe21fd56dd8070913c79c22f10fa4ac361f583c3b3dfb\"" Nov 1 04:17:56.748625 env[1306]: time="2025-11-01T04:17:56.748571554Z" level=info msg="StartContainer for \"079565a8115af57ab09fe21fd56dd8070913c79c22f10fa4ac361f583c3b3dfb\" returns successfully" Nov 1 04:17:56.760271 env[1306]: time="2025-11-01T04:17:56.760142678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-nt78v,Uid:de4509a5-f258-4e5a-834c-cb73b965a12d,Namespace:tigera-operator,Attempt:0,}" Nov 1 04:17:56.774471 env[1306]: time="2025-11-01T04:17:56.774405298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 04:17:56.774703 env[1306]: time="2025-11-01T04:17:56.774485399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 04:17:56.774703 env[1306]: time="2025-11-01T04:17:56.774509038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 04:17:56.774886 env[1306]: time="2025-11-01T04:17:56.774841506Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/981f34b35bebe273e30f5e78f0959b114fc52182b6aa57c859ace40e2a1bd88c pid=2312 runtime=io.containerd.runc.v2 Nov 1 04:17:56.872175 env[1306]: time="2025-11-01T04:17:56.872108372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-nt78v,Uid:de4509a5-f258-4e5a-834c-cb73b965a12d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"981f34b35bebe273e30f5e78f0959b114fc52182b6aa57c859ace40e2a1bd88c\"" Nov 1 04:17:56.875788 env[1306]: time="2025-11-01T04:17:56.874772788Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 1 04:17:57.106000 audit[2379]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2379 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:57.111475 kernel: kauditd_printk_skb: 4 callbacks suppressed Nov 1 04:17:57.111601 kernel: audit: type=1325 audit(1761970677.106:235): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2379 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:57.117516 kernel: audit: type=1300 audit(1761970677.106:235): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffa2417dd0 a2=0 a3=7fffa2417dbc items=0 ppid=2289 pid=2379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.106000 audit[2379]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffa2417dd0 a2=0 a3=7fffa2417dbc items=0 ppid=2289 pid=2379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.106000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 04:17:57.126046 kernel: audit: type=1327 audit(1761970677.106:235): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 04:17:57.126235 kernel: audit: type=1325 audit(1761970677.114:236): table=nat:39 family=2 entries=1 op=nft_register_chain pid=2380 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:57.126307 kernel: audit: type=1300 audit(1761970677.114:236): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb8204460 a2=0 a3=7ffeb820444c items=0 ppid=2289 pid=2380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.114000 audit[2380]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_chain pid=2380 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:57.114000 audit[2380]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb8204460 a2=0 a3=7ffeb820444c items=0 ppid=2289 pid=2380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.127855 kernel: audit: type=1327 audit(1761970677.114:236): 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Nov 1 04:17:57.114000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Nov 1 04:17:57.116000 audit[2381]: NETFILTER_CFG table=mangle:40 family=10 entries=1 op=nft_register_chain pid=2381 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 04:17:57.132042 kernel: audit: type=1325 audit(1761970677.116:237): table=mangle:40 family=10 entries=1 op=nft_register_chain pid=2381 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 04:17:57.132134 kernel: audit: type=1300 audit(1761970677.116:237): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff07c610c0 a2=0 a3=7fff07c610ac items=0 ppid=2289 pid=2381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.116000 audit[2381]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff07c610c0 a2=0 a3=7fff07c610ac items=0 ppid=2289 pid=2381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.116000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 04:17:57.138445 kernel: audit: type=1327 audit(1761970677.116:237): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Nov 1 04:17:57.138526 kernel: audit: type=1325 audit(1761970677.127:238): table=nat:41 family=10 entries=1 op=nft_register_chain pid=2382 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 04:17:57.127000 audit[2382]: NETFILTER_CFG table=nat:41 family=10 entries=1 op=nft_register_chain pid=2382 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 04:17:57.127000 audit[2382]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffec4d77f60 a2=0 a3=7ffec4d77f4c items=0 ppid=2289 pid=2382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.127000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Nov 1 04:17:57.141000 audit[2383]: NETFILTER_CFG table=filter:42 family=10 entries=1 op=nft_register_chain pid=2383 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 04:17:57.141000 audit[2383]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd60fe1840 a2=0 a3=7ffd60fe182c items=0 ppid=2289 pid=2383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.141000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Nov 1 04:17:57.143000 audit[2384]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2384 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:57.143000 audit[2384]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffce18eac0 a2=0 
a3=7fffce18eaac items=0 ppid=2289 pid=2384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.143000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Nov 1 04:17:57.230000 audit[2385]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2385 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:57.230000 audit[2385]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe0e3b1eb0 a2=0 a3=7ffe0e3b1e9c items=0 ppid=2289 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.230000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Nov 1 04:17:57.239000 audit[2387]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2387 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:57.239000 audit[2387]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdb69fec70 a2=0 a3=7ffdb69fec5c items=0 ppid=2289 pid=2387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.239000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Nov 1 04:17:57.251000 audit[2390]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2390 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:57.251000 audit[2390]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd26024990 a2=0 a3=7ffd2602497c items=0 ppid=2289 pid=2390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.251000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Nov 1 04:17:57.254000 audit[2391]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2391 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:57.254000 audit[2391]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc74f12e60 a2=0 a3=7ffc74f12e4c items=0 ppid=2289 pid=2391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.254000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Nov 1 04:17:57.257000 audit[2393]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2393 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Nov 1 04:17:57.257000 audit[2393]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcab0ead60 a2=0 a3=7ffcab0ead4c items=0 ppid=2289 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.257000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Nov 1 04:17:57.259000 audit[2394]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2394 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:57.259000 audit[2394]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffdd8bc8a0 a2=0 a3=7fffdd8bc88c items=0 ppid=2289 pid=2394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.259000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Nov 1 04:17:57.262000 audit[2396]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2396 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:57.262000 audit[2396]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffef45042e0 a2=0 a3=7ffef45042cc items=0 ppid=2289 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.262000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Nov 1 04:17:57.266000 audit[2399]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2399 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:57.266000 audit[2399]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdb9569f20 a2=0 a3=7ffdb9569f0c items=0 ppid=2289 pid=2399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.266000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Nov 1 04:17:57.268000 audit[2400]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2400 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:57.268000 audit[2400]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd647900f0 a2=0 a3=7ffd647900dc items=0 ppid=2289 pid=2400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.268000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Nov 1 04:17:57.271000 audit[2402]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2402 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:57.271000 audit[2402]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc1830ecd0 a2=0 a3=7ffc1830ecbc items=0 ppid=2289 pid=2402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.271000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Nov 1 04:17:57.272000 audit[2403]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2403 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:57.272000 audit[2403]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd8cab4f10 a2=0 a3=7ffd8cab4efc items=0 ppid=2289 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.272000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Nov 1 04:17:57.276000 audit[2405]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2405 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:57.276000 audit[2405]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffeed108090 a2=0 a3=7ffeed10807c items=0 ppid=2289 pid=2405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.276000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Nov 1 04:17:57.280000 audit[2408]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2408 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:57.280000 audit[2408]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffff97f03d0 a2=0 a3=7ffff97f03bc items=0 ppid=2289 pid=2408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.280000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Nov 1 04:17:57.285000 audit[2411]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2411 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:57.285000 audit[2411]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc2b6a8f90 a2=0 a3=7ffc2b6a8f7c items=0 ppid=2289 pid=2411 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.285000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Nov 1 04:17:57.287000 audit[2412]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2412 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:57.287000 audit[2412]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc9cba02e0 a2=0 a3=7ffc9cba02cc items=0 ppid=2289 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.287000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Nov 1 04:17:57.291000 audit[2414]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2414 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:57.291000 audit[2414]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffca9853d00 a2=0 a3=7ffca9853cec items=0 ppid=2289 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.291000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 04:17:57.296000 audit[2417]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2417 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:57.296000 audit[2417]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc988bac00 a2=0 a3=7ffc988babec items=0 ppid=2289 pid=2417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.296000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 04:17:57.297000 audit[2418]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2418 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 04:17:57.297000 audit[2418]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc6b26b6a0 a2=0 a3=7ffc6b26b68c items=0 ppid=2289 pid=2418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.297000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Nov 1 04:17:57.301000 audit[2420]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2420 subj=system_u:system_r:kernel_t:s0 comm="iptables" Nov 1 
04:17:57.301000 audit[2420]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffcf0199ca0 a2=0 a3=7ffcf0199c8c items=0 ppid=2289 pid=2420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.301000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Nov 1 04:17:57.337000 audit[2426]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2426 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:17:57.337000 audit[2426]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdb3dbc4b0 a2=0 a3=7ffdb3dbc49c items=0 ppid=2289 pid=2426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.337000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:17:57.348000 audit[2426]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2426 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:17:57.348000 audit[2426]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffdb3dbc4b0 a2=0 a3=7ffdb3dbc49c items=0 ppid=2289 pid=2426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.348000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:17:57.352000 audit[2431]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2431 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 04:17:57.352000 audit[2431]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffd7c17bc0 a2=0 a3=7fffd7c17bac items=0 ppid=2289 pid=2431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.352000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Nov 1 04:17:57.361000 audit[2433]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2433 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 04:17:57.361000 audit[2433]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd04357dc0 a2=0 a3=7ffd04357dac items=0 ppid=2289 pid=2433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.361000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Nov 1 04:17:57.366000 audit[2436]: 
NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2436 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 04:17:57.366000 audit[2436]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd9cff5f40 a2=0 a3=7ffd9cff5f2c items=0 ppid=2289 pid=2436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.366000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Nov 1 04:17:57.368000 audit[2437]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2437 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 04:17:57.368000 audit[2437]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd77ae1660 a2=0 a3=7ffd77ae164c items=0 ppid=2289 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.368000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Nov 1 04:17:57.371000 audit[2439]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2439 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 04:17:57.371000 audit[2439]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff56fe11b0 a2=0 a3=7fff56fe119c items=0 ppid=2289 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.371000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Nov 1 04:17:57.372000 audit[2440]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2440 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 04:17:57.372000 audit[2440]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcb6cfe560 a2=0 a3=7ffcb6cfe54c items=0 ppid=2289 pid=2440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.372000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Nov 1 04:17:57.375000 audit[2442]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2442 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 04:17:57.375000 audit[2442]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffce7fe0b30 a2=0 a3=7ffce7fe0b1c items=0 ppid=2289 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.375000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Nov 1 04:17:57.380000 audit[2445]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2445 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 04:17:57.380000 audit[2445]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffda42c19e0 a2=0 a3=7ffda42c19cc items=0 ppid=2289 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.380000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Nov 1 04:17:57.382000 audit[2446]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2446 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 04:17:57.382000 audit[2446]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdb9c5f280 a2=0 a3=7ffdb9c5f26c items=0 ppid=2289 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.382000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Nov 1 04:17:57.385000 audit[2448]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2448 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 04:17:57.385000 audit[2448]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc0d7fb230 a2=0 a3=7ffc0d7fb21c items=0 ppid=2289 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.385000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Nov 1 04:17:57.387000 audit[2449]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2449 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 04:17:57.387000 audit[2449]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe1a670b00 a2=0 a3=7ffe1a670aec items=0 ppid=2289 pid=2449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.387000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Nov 1 04:17:57.391000 audit[2451]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2451 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 04:17:57.391000 audit[2451]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff215c0400 a2=0 a3=7fff215c03ec items=0 
ppid=2289 pid=2451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.391000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Nov 1 04:17:57.395000 audit[2454]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2454 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 04:17:57.395000 audit[2454]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcc1bc6070 a2=0 a3=7ffcc1bc605c items=0 ppid=2289 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.395000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Nov 1 04:17:57.401000 audit[2457]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2457 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 04:17:57.401000 audit[2457]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff543dc300 a2=0 a3=7fff543dc2ec items=0 ppid=2289 pid=2457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.401000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Nov 1 04:17:57.402000 audit[2458]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2458 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 04:17:57.402000 audit[2458]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcda280c10 a2=0 a3=7ffcda280bfc items=0 ppid=2289 pid=2458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.402000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Nov 1 04:17:57.405000 audit[2460]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2460 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 04:17:57.405000 audit[2460]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffcb8c55e60 a2=0 a3=7ffcb8c55e4c items=0 ppid=2289 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.405000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 04:17:57.410000 audit[2463]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2463 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 04:17:57.410000 audit[2463]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fffb01f95a0 a2=0 a3=7fffb01f958c items=0 ppid=2289 pid=2463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.410000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Nov 1 04:17:57.411000 audit[2464]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2464 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 04:17:57.411000 audit[2464]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe8085bd20 a2=0 a3=7ffe8085bd0c items=0 ppid=2289 pid=2464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.411000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Nov 1 04:17:57.414000 audit[2466]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2466 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 04:17:57.414000 audit[2466]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc7d606680 a2=0 a3=7ffc7d60666c items=0 ppid=2289 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.414000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Nov 1 04:17:57.416000 audit[2467]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2467 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 04:17:57.416000 audit[2467]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd19a65860 a2=0 a3=7ffd19a6584c items=0 ppid=2289 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.416000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Nov 1 04:17:57.419000 audit[2469]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2469 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 04:17:57.419000 audit[2469]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcc95bd210 a2=0 a3=7ffcc95bd1fc items=0 ppid=2289 pid=2469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.419000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 04:17:57.423000 audit[2472]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2472 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Nov 1 04:17:57.423000 audit[2472]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffefc955d70 a2=0 a3=7ffefc955d5c items=0 ppid=2289 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.423000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Nov 1 04:17:57.427000 audit[2474]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2474 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Nov 1 04:17:57.427000 audit[2474]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fffe4f19180 a2=0 a3=7fffe4f1916c items=0 ppid=2289 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.427000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:17:57.428000 audit[2474]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2474 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Nov 1 04:17:57.428000 audit[2474]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fffe4f19180 a2=0 a3=7fffe4f1916c items=0 ppid=2289 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:17:57.428000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:17:57.504509 systemd[1]: run-containerd-runc-k8s.io-1d2fbcdc7ce437df0a1fbba87fb442c9f79021f9d1115ea6c8452363e039296c-runc.XCMKvF.mount: Deactivated successfully. Nov 1 04:17:57.641130 kubelet[2178]: I1101 04:17:57.641058 2178 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bqqxs" podStartSLOduration=1.6410126489999999 podStartE2EDuration="1.641012649s" podCreationTimestamp="2025-11-01 04:17:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 04:17:56.903065228 +0000 UTC m=+4.363967946" watchObservedRunningTime="2025-11-01 04:17:57.641012649 +0000 UTC m=+5.101915367" Nov 1 04:17:58.813637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2634556321.mount: Deactivated successfully. 
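Most of the audit records between 04:17:57.106 and 04:17:57.428 are kube-proxy creating its KUBE-* chains and jump rules through iptables, ip6tables, and their *-restore variants; each PROCTITLE field records the triggering command line as hex-encoded, NUL-separated argv. A minimal sketch of decoding one of those values back into a readable command follows; the sample string is the proctitle from the first NETFILTER_CFG record in this stretch, and the helper name is an arbitrary choice.

```python
def decode_proctitle(hex_argv: str) -> str:
    """Turn an audit PROCTITLE value (hex bytes, NUL-separated argv) into a shell-style command line."""
    raw = bytes.fromhex(hex_argv)
    return " ".join(arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg)

# Proctitle copied from the audit record at 04:17:57.106 above:
sample = ("69707461626C6573002D770035002D5700313030303030"
          "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65")
print(decode_proctitle(sample))
# -> iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle
```

Applied to the later records, the same decoding yields the rest of the bootstrap sequence, for example iptables -w 5 -W 100000 -N KUBE-EXTERNAL-SERVICES -t filter and the -I INPUT / -I FORWARD / -I OUTPUT jump rules carrying the "kubernetes ..." comments.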
Nov 1 04:17:59.779148 env[1306]: time="2025-11-01T04:17:59.779099527Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:59.781165 env[1306]: time="2025-11-01T04:17:59.781137513Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:59.783165 env[1306]: time="2025-11-01T04:17:59.783137772Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:59.785014 env[1306]: time="2025-11-01T04:17:59.784991489Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:17:59.785733 env[1306]: time="2025-11-01T04:17:59.785705448Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 1 04:17:59.797101 env[1306]: time="2025-11-01T04:17:59.797058372Z" level=info msg="CreateContainer within sandbox \"981f34b35bebe273e30f5e78f0959b114fc52182b6aa57c859ace40e2a1bd88c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 1 04:17:59.807093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount96111635.mount: Deactivated successfully. Nov 1 04:17:59.815798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2307708977.mount: Deactivated successfully. Nov 1 04:17:59.817866 env[1306]: time="2025-11-01T04:17:59.817831556Z" level=info msg="CreateContainer within sandbox \"981f34b35bebe273e30f5e78f0959b114fc52182b6aa57c859ace40e2a1bd88c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"75cc814aee950db1e6dcc4fa1ee6ea239ab04d844853f85f1b542973801f3217\"" Nov 1 04:17:59.818759 env[1306]: time="2025-11-01T04:17:59.818735492Z" level=info msg="StartContainer for \"75cc814aee950db1e6dcc4fa1ee6ea239ab04d844853f85f1b542973801f3217\"" Nov 1 04:17:59.876115 env[1306]: time="2025-11-01T04:17:59.876062255Z" level=info msg="StartContainer for \"75cc814aee950db1e6dcc4fa1ee6ea239ab04d844853f85f1b542973801f3217\" returns successfully" Nov 1 04:17:59.907792 kubelet[2178]: I1101 04:17:59.907743 2178 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-nt78v" podStartSLOduration=0.989621176 podStartE2EDuration="3.907721902s" podCreationTimestamp="2025-11-01 04:17:56 +0000 UTC" firstStartedPulling="2025-11-01 04:17:56.873741522 +0000 UTC m=+4.334644235" lastFinishedPulling="2025-11-01 04:17:59.791842265 +0000 UTC m=+7.252744961" observedRunningTime="2025-11-01 04:17:59.907222752 +0000 UTC m=+7.368125497" watchObservedRunningTime="2025-11-01 04:17:59.907721902 +0000 UTC m=+7.368624620" Nov 1 04:18:06.978280 sudo[1532]: pam_unix(sudo:session): session closed for user root Nov 1 04:18:06.986801 kernel: kauditd_printk_skb: 143 callbacks suppressed Nov 1 04:18:06.987260 kernel: audit: type=1106 audit(1761970686.977:286): pid=1532 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? 
addr=? terminal=? res=success' Nov 1 04:18:06.977000 audit[1532]: USER_END pid=1532 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 04:18:06.977000 audit[1532]: CRED_DISP pid=1532 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 04:18:06.996352 kernel: audit: type=1104 audit(1761970686.977:287): pid=1532 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Nov 1 04:18:07.137254 sshd[1528]: pam_unix(sshd:session): session closed for user core Nov 1 04:18:07.145903 kernel: audit: type=1106 audit(1761970687.138:288): pid=1528 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:18:07.138000 audit[1528]: USER_END pid=1528 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:18:07.139000 audit[1528]: CRED_DISP pid=1528 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:18:07.151343 kernel: audit: type=1104 audit(1761970687.139:289): pid=1528 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:18:07.154411 systemd[1]: sshd@8-10.244.102.154:22-139.178.89.65:41926.service: Deactivated successfully. Nov 1 04:18:07.155939 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 04:18:07.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.244.102.154:22-139.178.89.65:41926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:18:07.156575 systemd-logind[1298]: Session 9 logged out. Waiting for processes to exit. Nov 1 04:18:07.168423 kernel: audit: type=1131 audit(1761970687.153:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.244.102.154:22-139.178.89.65:41926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:18:07.157719 systemd-logind[1298]: Removed session 9. 
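In the tigera-operator startup entry above, podStartSLOduration (about 0.99 s) excludes image-pull time while podStartE2EDuration (about 3.91 s) includes it; the gap corresponds to the pull of quay.io/tigera/operator:v1.38.7 between firstStartedPulling and lastFinishedPulling. A minimal sketch of recomputing that pull duration from the two timestamps (copied from the entry, with the nanosecond fractions truncated to microseconds for strptime) is below.

```python
from datetime import datetime

# Timestamps from the tigera-operator pod_startup_latency_tracker entry,
# truncated from nanoseconds to the microsecond precision that %f accepts.
FMT = "%Y-%m-%d %H:%M:%S.%f %z"
first_started_pulling = datetime.strptime("2025-11-01 04:17:56.873741 +0000", FMT)
last_finished_pulling = datetime.strptime("2025-11-01 04:17:59.791842 +0000", FMT)

pull = last_finished_pulling - first_started_pulling
print(f"image pull took {pull.total_seconds():.3f}s")  # ~2.918 s
```

The same figure falls out of the Go monotonic offsets in the entry: m=+7.252744961 minus m=+4.334644235 is 2.918100726 s, which matches podStartE2EDuration minus podStartSLOduration.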
Nov 1 04:18:07.852000 audit[2554]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2554 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:18:07.860762 kernel: audit: type=1325 audit(1761970687.852:291): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2554 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:18:07.852000 audit[2554]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff8c74d010 a2=0 a3=7fff8c74cffc items=0 ppid=2289 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:07.867390 kernel: audit: type=1300 audit(1761970687.852:291): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff8c74d010 a2=0 a3=7fff8c74cffc items=0 ppid=2289 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:07.852000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:18:07.875342 kernel: audit: type=1327 audit(1761970687.852:291): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:18:07.861000 audit[2554]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2554 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:18:07.882337 kernel: audit: type=1325 audit(1761970687.861:292): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2554 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:18:07.861000 audit[2554]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff8c74d010 a2=0 a3=0 items=0 ppid=2289 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:07.888343 kernel: audit: type=1300 audit(1761970687.861:292): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff8c74d010 a2=0 a3=0 items=0 ppid=2289 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:07.861000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:18:07.880000 audit[2556]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2556 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:18:07.880000 audit[2556]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe227c9310 a2=0 a3=7ffe227c92fc items=0 ppid=2289 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:07.880000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:18:07.890000 audit[2556]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2556 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Nov 1 04:18:07.890000 audit[2556]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe227c9310 a2=0 a3=0 items=0 ppid=2289 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:07.890000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:18:10.147000 audit[2559]: NETFILTER_CFG table=filter:93 family=2 entries=16 op=nft_register_rule pid=2559 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:18:10.147000 audit[2559]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffd283c75b0 a2=0 a3=7ffd283c759c items=0 ppid=2289 pid=2559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:10.147000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:18:10.151000 audit[2559]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2559 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:18:10.151000 audit[2559]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd283c75b0 a2=0 a3=0 items=0 ppid=2289 pid=2559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:10.151000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:18:10.196000 audit[2561]: NETFILTER_CFG table=filter:95 family=2 entries=17 op=nft_register_rule pid=2561 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:18:10.196000 audit[2561]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd092c93d0 a2=0 a3=7ffd092c93bc items=0 ppid=2289 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:10.196000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:18:10.209000 audit[2561]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2561 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:18:10.209000 audit[2561]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd092c93d0 a2=0 a3=0 items=0 ppid=2289 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:10.209000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:18:11.294000 audit[2563]: NETFILTER_CFG table=filter:97 family=2 entries=19 op=nft_register_rule pid=2563 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:18:11.294000 audit[2563]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffee0580600 a2=0 a3=7ffee05805ec 
items=0 ppid=2289 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:11.294000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:18:11.299000 audit[2563]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2563 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:18:11.299000 audit[2563]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffee0580600 a2=0 a3=0 items=0 ppid=2289 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:11.299000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:18:12.179777 kubelet[2178]: I1101 04:18:12.179699 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn6cn\" (UniqueName: \"kubernetes.io/projected/0c28eaca-535d-4c4a-bdf8-d3c2cd230d3e-kube-api-access-nn6cn\") pod \"calico-typha-5489bc6bc7-djd7f\" (UID: \"0c28eaca-535d-4c4a-bdf8-d3c2cd230d3e\") " pod="calico-system/calico-typha-5489bc6bc7-djd7f" Nov 1 04:18:12.180398 kubelet[2178]: I1101 04:18:12.180368 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c28eaca-535d-4c4a-bdf8-d3c2cd230d3e-tigera-ca-bundle\") pod \"calico-typha-5489bc6bc7-djd7f\" (UID: \"0c28eaca-535d-4c4a-bdf8-d3c2cd230d3e\") " pod="calico-system/calico-typha-5489bc6bc7-djd7f" Nov 1 04:18:12.180549 kubelet[2178]: I1101 04:18:12.180532 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0c28eaca-535d-4c4a-bdf8-d3c2cd230d3e-typha-certs\") pod \"calico-typha-5489bc6bc7-djd7f\" (UID: \"0c28eaca-535d-4c4a-bdf8-d3c2cd230d3e\") " pod="calico-system/calico-typha-5489bc6bc7-djd7f" Nov 1 04:18:12.211000 audit[2565]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=2565 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:18:12.217061 kernel: kauditd_printk_skb: 25 callbacks suppressed Nov 1 04:18:12.217267 kernel: audit: type=1325 audit(1761970692.211:301): table=filter:99 family=2 entries=21 op=nft_register_rule pid=2565 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:18:12.211000 audit[2565]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc3a471ad0 a2=0 a3=7ffc3a471abc items=0 ppid=2289 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:12.225930 kernel: audit: type=1300 audit(1761970692.211:301): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc3a471ad0 a2=0 a3=7ffc3a471abc items=0 ppid=2289 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:12.226122 kernel: audit: type=1327 
audit(1761970692.211:301): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:18:12.211000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:18:12.223000 audit[2565]: NETFILTER_CFG table=nat:100 family=2 entries=12 op=nft_register_rule pid=2565 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:18:12.223000 audit[2565]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc3a471ad0 a2=0 a3=0 items=0 ppid=2289 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:12.240705 kernel: audit: type=1325 audit(1761970692.223:302): table=nat:100 family=2 entries=12 op=nft_register_rule pid=2565 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:18:12.240782 kernel: audit: type=1300 audit(1761970692.223:302): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc3a471ad0 a2=0 a3=0 items=0 ppid=2289 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:12.223000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:18:12.243298 kernel: audit: type=1327 audit(1761970692.223:302): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:18:12.451185 env[1306]: time="2025-11-01T04:18:12.450969224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5489bc6bc7-djd7f,Uid:0c28eaca-535d-4c4a-bdf8-d3c2cd230d3e,Namespace:calico-system,Attempt:0,}" Nov 1 04:18:12.471276 env[1306]: time="2025-11-01T04:18:12.471202604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 04:18:12.471509 env[1306]: time="2025-11-01T04:18:12.471251042Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 04:18:12.471509 env[1306]: time="2025-11-01T04:18:12.471264046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 04:18:12.471657 env[1306]: time="2025-11-01T04:18:12.471573524Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f78355dff1cd4cc6e2bbcf4bf74ee0d51be80eab7b7bb5dd28fe6e34a64f87ed pid=2575 runtime=io.containerd.runc.v2 Nov 1 04:18:12.485709 kubelet[2178]: I1101 04:18:12.485679 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8a46453e-6e5a-480f-9150-24cba2ca6eb8-node-certs\") pod \"calico-node-n5nnw\" (UID: \"8a46453e-6e5a-480f-9150-24cba2ca6eb8\") " pod="calico-system/calico-node-n5nnw" Nov 1 04:18:12.486790 kubelet[2178]: I1101 04:18:12.486760 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a46453e-6e5a-480f-9150-24cba2ca6eb8-tigera-ca-bundle\") pod \"calico-node-n5nnw\" (UID: \"8a46453e-6e5a-480f-9150-24cba2ca6eb8\") " pod="calico-system/calico-node-n5nnw" Nov 1 04:18:12.486790 kubelet[2178]: I1101 04:18:12.486793 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a46453e-6e5a-480f-9150-24cba2ca6eb8-xtables-lock\") pod \"calico-node-n5nnw\" (UID: \"8a46453e-6e5a-480f-9150-24cba2ca6eb8\") " pod="calico-system/calico-node-n5nnw" Nov 1 04:18:12.486991 kubelet[2178]: I1101 04:18:12.486816 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8a46453e-6e5a-480f-9150-24cba2ca6eb8-cni-net-dir\") pod \"calico-node-n5nnw\" (UID: \"8a46453e-6e5a-480f-9150-24cba2ca6eb8\") " pod="calico-system/calico-node-n5nnw" Nov 1 04:18:12.486991 kubelet[2178]: I1101 04:18:12.486838 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8a46453e-6e5a-480f-9150-24cba2ca6eb8-policysync\") pod \"calico-node-n5nnw\" (UID: \"8a46453e-6e5a-480f-9150-24cba2ca6eb8\") " pod="calico-system/calico-node-n5nnw" Nov 1 04:18:12.486991 kubelet[2178]: I1101 04:18:12.486871 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8a46453e-6e5a-480f-9150-24cba2ca6eb8-cni-bin-dir\") pod \"calico-node-n5nnw\" (UID: \"8a46453e-6e5a-480f-9150-24cba2ca6eb8\") " pod="calico-system/calico-node-n5nnw" Nov 1 04:18:12.486991 kubelet[2178]: I1101 04:18:12.486887 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8a46453e-6e5a-480f-9150-24cba2ca6eb8-cni-log-dir\") pod \"calico-node-n5nnw\" (UID: \"8a46453e-6e5a-480f-9150-24cba2ca6eb8\") " pod="calico-system/calico-node-n5nnw" Nov 1 04:18:12.486991 kubelet[2178]: I1101 04:18:12.486910 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8a46453e-6e5a-480f-9150-24cba2ca6eb8-flexvol-driver-host\") pod \"calico-node-n5nnw\" (UID: \"8a46453e-6e5a-480f-9150-24cba2ca6eb8\") " pod="calico-system/calico-node-n5nnw" Nov 1 04:18:12.487183 kubelet[2178]: I1101 04:18:12.486928 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8a46453e-6e5a-480f-9150-24cba2ca6eb8-var-lib-calico\") pod \"calico-node-n5nnw\" (UID: \"8a46453e-6e5a-480f-9150-24cba2ca6eb8\") " pod="calico-system/calico-node-n5nnw" Nov 1 04:18:12.487183 kubelet[2178]: I1101 04:18:12.486968 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a46453e-6e5a-480f-9150-24cba2ca6eb8-lib-modules\") pod \"calico-node-n5nnw\" (UID: \"8a46453e-6e5a-480f-9150-24cba2ca6eb8\") " pod="calico-system/calico-node-n5nnw" Nov 1 04:18:12.487183 kubelet[2178]: I1101 04:18:12.486987 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8a46453e-6e5a-480f-9150-24cba2ca6eb8-var-run-calico\") pod \"calico-node-n5nnw\" (UID: \"8a46453e-6e5a-480f-9150-24cba2ca6eb8\") " pod="calico-system/calico-node-n5nnw" Nov 1 04:18:12.487183 kubelet[2178]: I1101 04:18:12.487003 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfsr8\" (UniqueName: \"kubernetes.io/projected/8a46453e-6e5a-480f-9150-24cba2ca6eb8-kube-api-access-zfsr8\") pod \"calico-node-n5nnw\" (UID: \"8a46453e-6e5a-480f-9150-24cba2ca6eb8\") " pod="calico-system/calico-node-n5nnw" Nov 1 04:18:12.568716 env[1306]: time="2025-11-01T04:18:12.567018983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5489bc6bc7-djd7f,Uid:0c28eaca-535d-4c4a-bdf8-d3c2cd230d3e,Namespace:calico-system,Attempt:0,} returns sandbox id \"f78355dff1cd4cc6e2bbcf4bf74ee0d51be80eab7b7bb5dd28fe6e34a64f87ed\"" Nov 1 04:18:12.572751 kubelet[2178]: E1101 04:18:12.572581 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4ccfj" podUID="f7f98be6-9d36-4c44-bedf-cd179c76bbfe" Nov 1 04:18:12.574152 env[1306]: time="2025-11-01T04:18:12.574115970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 1 04:18:12.595180 kubelet[2178]: E1101 04:18:12.595115 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.595483 kubelet[2178]: W1101 04:18:12.595450 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.597774 kubelet[2178]: E1101 04:18:12.597741 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.597966 kubelet[2178]: W1101 04:18:12.597939 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.600735 kubelet[2178]: E1101 04:18:12.600696 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 04:18:12.602420 kubelet[2178]: E1101 04:18:12.602391 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.602621 kubelet[2178]: W1101 04:18:12.602595 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.602760 kubelet[2178]: E1101 04:18:12.602738 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.603108 kubelet[2178]: E1101 04:18:12.603088 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.604270 kubelet[2178]: E1101 04:18:12.603339 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.604411 kubelet[2178]: W1101 04:18:12.604395 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.604570 kubelet[2178]: E1101 04:18:12.604528 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.604906 kubelet[2178]: E1101 04:18:12.604892 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.604990 kubelet[2178]: W1101 04:18:12.604977 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.605142 kubelet[2178]: E1101 04:18:12.605128 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.605489 kubelet[2178]: E1101 04:18:12.605477 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.605577 kubelet[2178]: W1101 04:18:12.605564 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.605703 kubelet[2178]: E1101 04:18:12.605663 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 04:18:12.605976 kubelet[2178]: E1101 04:18:12.605965 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.606049 kubelet[2178]: W1101 04:18:12.606037 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.606184 kubelet[2178]: E1101 04:18:12.606172 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.606522 kubelet[2178]: E1101 04:18:12.606510 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.606621 kubelet[2178]: W1101 04:18:12.606609 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.606785 kubelet[2178]: E1101 04:18:12.606759 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.606975 kubelet[2178]: E1101 04:18:12.606963 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.607045 kubelet[2178]: W1101 04:18:12.607033 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.607196 kubelet[2178]: E1101 04:18:12.607176 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.607351 kubelet[2178]: E1101 04:18:12.607341 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.607441 kubelet[2178]: W1101 04:18:12.607429 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.607532 kubelet[2178]: E1101 04:18:12.607511 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.607764 kubelet[2178]: E1101 04:18:12.607754 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.607832 kubelet[2178]: W1101 04:18:12.607821 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.607921 kubelet[2178]: E1101 04:18:12.607904 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 04:18:12.608163 kubelet[2178]: E1101 04:18:12.608154 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.608236 kubelet[2178]: W1101 04:18:12.608224 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.608407 kubelet[2178]: E1101 04:18:12.608346 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.608689 kubelet[2178]: E1101 04:18:12.608678 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.608777 kubelet[2178]: W1101 04:18:12.608765 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.608917 kubelet[2178]: E1101 04:18:12.608894 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.609103 kubelet[2178]: E1101 04:18:12.609093 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.609167 kubelet[2178]: W1101 04:18:12.609156 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.609255 kubelet[2178]: E1101 04:18:12.609236 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.609518 kubelet[2178]: E1101 04:18:12.609508 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.609600 kubelet[2178]: W1101 04:18:12.609587 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.609687 kubelet[2178]: E1101 04:18:12.609670 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.610047 kubelet[2178]: E1101 04:18:12.610037 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.610122 kubelet[2178]: W1101 04:18:12.610110 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.610220 kubelet[2178]: E1101 04:18:12.610201 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 04:18:12.610534 kubelet[2178]: E1101 04:18:12.610522 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.610611 kubelet[2178]: W1101 04:18:12.610599 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.610694 kubelet[2178]: E1101 04:18:12.610676 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.610972 kubelet[2178]: E1101 04:18:12.610962 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.611062 kubelet[2178]: W1101 04:18:12.611050 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.611146 kubelet[2178]: E1101 04:18:12.611130 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.611424 kubelet[2178]: E1101 04:18:12.611413 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.611506 kubelet[2178]: W1101 04:18:12.611494 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.611609 kubelet[2178]: E1101 04:18:12.611590 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.611823 kubelet[2178]: E1101 04:18:12.611813 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.611892 kubelet[2178]: W1101 04:18:12.611881 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.612000 kubelet[2178]: E1101 04:18:12.611982 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.612211 kubelet[2178]: E1101 04:18:12.612201 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.612280 kubelet[2178]: W1101 04:18:12.612268 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.612399 kubelet[2178]: E1101 04:18:12.612378 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 04:18:12.612790 kubelet[2178]: E1101 04:18:12.612778 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.612867 kubelet[2178]: W1101 04:18:12.612855 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.612992 kubelet[2178]: E1101 04:18:12.612960 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.613712 kubelet[2178]: E1101 04:18:12.613694 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.623250 kubelet[2178]: W1101 04:18:12.623222 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.623909 kubelet[2178]: E1101 04:18:12.623895 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.624008 kubelet[2178]: W1101 04:18:12.623995 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.625613 kubelet[2178]: E1101 04:18:12.625589 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.625755 kubelet[2178]: E1101 04:18:12.625741 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.625973 kubelet[2178]: E1101 04:18:12.625860 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.626030 kubelet[2178]: W1101 04:18:12.625984 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.626867 kubelet[2178]: E1101 04:18:12.626848 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 04:18:12.629444 kubelet[2178]: E1101 04:18:12.629135 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.629444 kubelet[2178]: W1101 04:18:12.629154 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.629739 kubelet[2178]: E1101 04:18:12.629698 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.629739 kubelet[2178]: W1101 04:18:12.629711 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.630832 kubelet[2178]: E1101 04:18:12.630793 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.631076 kubelet[2178]: E1101 04:18:12.631046 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.633972 kubelet[2178]: E1101 04:18:12.633938 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.634134 kubelet[2178]: W1101 04:18:12.634113 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.634428 kubelet[2178]: E1101 04:18:12.634407 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.634930 kubelet[2178]: E1101 04:18:12.634914 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.635059 kubelet[2178]: W1101 04:18:12.635040 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.635725 kubelet[2178]: E1101 04:18:12.635707 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.635847 kubelet[2178]: W1101 04:18:12.635828 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.635968 kubelet[2178]: E1101 04:18:12.635946 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.636088 kubelet[2178]: E1101 04:18:12.636062 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 04:18:12.641573 kubelet[2178]: E1101 04:18:12.641551 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.641734 kubelet[2178]: W1101 04:18:12.641712 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.641848 kubelet[2178]: E1101 04:18:12.641830 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.668591 kubelet[2178]: E1101 04:18:12.668554 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.668844 kubelet[2178]: W1101 04:18:12.668822 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.668927 kubelet[2178]: E1101 04:18:12.668913 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.669260 kubelet[2178]: E1101 04:18:12.669247 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.669389 kubelet[2178]: W1101 04:18:12.669375 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.669475 kubelet[2178]: E1101 04:18:12.669464 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.669899 kubelet[2178]: E1101 04:18:12.669887 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.670078 kubelet[2178]: W1101 04:18:12.670064 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.670150 kubelet[2178]: E1101 04:18:12.670139 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.670577 kubelet[2178]: E1101 04:18:12.670566 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.670669 kubelet[2178]: W1101 04:18:12.670656 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.670747 kubelet[2178]: E1101 04:18:12.670736 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 04:18:12.671063 kubelet[2178]: E1101 04:18:12.671053 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.671145 kubelet[2178]: W1101 04:18:12.671132 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.671216 kubelet[2178]: E1101 04:18:12.671205 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.671554 kubelet[2178]: E1101 04:18:12.671544 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.671644 kubelet[2178]: W1101 04:18:12.671632 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.671717 kubelet[2178]: E1101 04:18:12.671705 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.671998 kubelet[2178]: E1101 04:18:12.671987 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.672082 kubelet[2178]: W1101 04:18:12.672070 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.672152 kubelet[2178]: E1101 04:18:12.672141 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.672444 kubelet[2178]: E1101 04:18:12.672435 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.672520 kubelet[2178]: W1101 04:18:12.672509 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.672596 kubelet[2178]: E1101 04:18:12.672585 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.672888 kubelet[2178]: E1101 04:18:12.672879 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.672968 kubelet[2178]: W1101 04:18:12.672957 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.673040 kubelet[2178]: E1101 04:18:12.673029 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 04:18:12.673310 kubelet[2178]: E1101 04:18:12.673300 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.673420 kubelet[2178]: W1101 04:18:12.673407 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.673500 kubelet[2178]: E1101 04:18:12.673488 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.673842 kubelet[2178]: E1101 04:18:12.673823 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.673937 kubelet[2178]: W1101 04:18:12.673925 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.674011 kubelet[2178]: E1101 04:18:12.674000 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.674286 kubelet[2178]: E1101 04:18:12.674276 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.674412 kubelet[2178]: W1101 04:18:12.674400 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.674487 kubelet[2178]: E1101 04:18:12.674476 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.675241 kubelet[2178]: E1101 04:18:12.675228 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.675423 kubelet[2178]: W1101 04:18:12.675375 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.675512 kubelet[2178]: E1101 04:18:12.675500 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.675828 kubelet[2178]: E1101 04:18:12.675816 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.675915 kubelet[2178]: W1101 04:18:12.675904 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.675981 kubelet[2178]: E1101 04:18:12.675970 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 04:18:12.676193 kubelet[2178]: E1101 04:18:12.676185 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.676267 kubelet[2178]: W1101 04:18:12.676256 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.676371 kubelet[2178]: E1101 04:18:12.676350 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.676584 kubelet[2178]: E1101 04:18:12.676575 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.676665 kubelet[2178]: W1101 04:18:12.676653 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.676731 kubelet[2178]: E1101 04:18:12.676720 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.676991 kubelet[2178]: E1101 04:18:12.676982 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.677072 kubelet[2178]: W1101 04:18:12.677060 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.677143 kubelet[2178]: E1101 04:18:12.677133 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.677422 kubelet[2178]: E1101 04:18:12.677412 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.677501 kubelet[2178]: W1101 04:18:12.677489 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.677564 kubelet[2178]: E1101 04:18:12.677553 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.677784 kubelet[2178]: E1101 04:18:12.677775 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.677855 kubelet[2178]: W1101 04:18:12.677843 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.677935 kubelet[2178]: E1101 04:18:12.677924 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 04:18:12.678216 kubelet[2178]: E1101 04:18:12.678207 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.678292 kubelet[2178]: W1101 04:18:12.678281 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.678548 kubelet[2178]: E1101 04:18:12.678521 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.680831 env[1306]: time="2025-11-01T04:18:12.679530276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n5nnw,Uid:8a46453e-6e5a-480f-9150-24cba2ca6eb8,Namespace:calico-system,Attempt:0,}" Nov 1 04:18:12.696370 kubelet[2178]: E1101 04:18:12.696003 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.696370 kubelet[2178]: W1101 04:18:12.696064 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.696370 kubelet[2178]: E1101 04:18:12.696130 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.696370 kubelet[2178]: I1101 04:18:12.696175 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f7f98be6-9d36-4c44-bedf-cd179c76bbfe-varrun\") pod \"csi-node-driver-4ccfj\" (UID: \"f7f98be6-9d36-4c44-bedf-cd179c76bbfe\") " pod="calico-system/csi-node-driver-4ccfj" Nov 1 04:18:12.707664 kubelet[2178]: E1101 04:18:12.700199 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.707664 kubelet[2178]: W1101 04:18:12.700227 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.707664 kubelet[2178]: E1101 04:18:12.700273 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 04:18:12.707664 kubelet[2178]: I1101 04:18:12.700373 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f7f98be6-9d36-4c44-bedf-cd179c76bbfe-socket-dir\") pod \"csi-node-driver-4ccfj\" (UID: \"f7f98be6-9d36-4c44-bedf-cd179c76bbfe\") " pod="calico-system/csi-node-driver-4ccfj" Nov 1 04:18:12.707664 kubelet[2178]: E1101 04:18:12.703916 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.707664 kubelet[2178]: W1101 04:18:12.703946 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.707664 kubelet[2178]: E1101 04:18:12.703966 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.707664 kubelet[2178]: I1101 04:18:12.703993 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f7f98be6-9d36-4c44-bedf-cd179c76bbfe-kubelet-dir\") pod \"csi-node-driver-4ccfj\" (UID: \"f7f98be6-9d36-4c44-bedf-cd179c76bbfe\") " pod="calico-system/csi-node-driver-4ccfj" Nov 1 04:18:12.707664 kubelet[2178]: E1101 04:18:12.707211 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.708073 kubelet[2178]: W1101 04:18:12.707226 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.708073 kubelet[2178]: E1101 04:18:12.707258 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.708073 kubelet[2178]: I1101 04:18:12.707285 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwflm\" (UniqueName: \"kubernetes.io/projected/f7f98be6-9d36-4c44-bedf-cd179c76bbfe-kube-api-access-fwflm\") pod \"csi-node-driver-4ccfj\" (UID: \"f7f98be6-9d36-4c44-bedf-cd179c76bbfe\") " pod="calico-system/csi-node-driver-4ccfj" Nov 1 04:18:12.708073 kubelet[2178]: E1101 04:18:12.707525 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.708073 kubelet[2178]: W1101 04:18:12.707537 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.708073 kubelet[2178]: E1101 04:18:12.707547 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 04:18:12.708073 kubelet[2178]: I1101 04:18:12.707568 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f7f98be6-9d36-4c44-bedf-cd179c76bbfe-registration-dir\") pod \"csi-node-driver-4ccfj\" (UID: \"f7f98be6-9d36-4c44-bedf-cd179c76bbfe\") " pod="calico-system/csi-node-driver-4ccfj" Nov 1 04:18:12.708073 kubelet[2178]: E1101 04:18:12.707769 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.708329 kubelet[2178]: W1101 04:18:12.707778 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.708329 kubelet[2178]: E1101 04:18:12.707788 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.708329 kubelet[2178]: E1101 04:18:12.707988 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.708329 kubelet[2178]: W1101 04:18:12.707995 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.708329 kubelet[2178]: E1101 04:18:12.708004 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.708329 kubelet[2178]: E1101 04:18:12.708173 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.708329 kubelet[2178]: W1101 04:18:12.708180 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.708329 kubelet[2178]: E1101 04:18:12.708191 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.708574 kubelet[2178]: E1101 04:18:12.708344 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.708574 kubelet[2178]: W1101 04:18:12.708352 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.708574 kubelet[2178]: E1101 04:18:12.708370 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 04:18:12.708574 kubelet[2178]: E1101 04:18:12.708511 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.708574 kubelet[2178]: W1101 04:18:12.708518 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.708574 kubelet[2178]: E1101 04:18:12.708526 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.708755 kubelet[2178]: E1101 04:18:12.708656 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.708755 kubelet[2178]: W1101 04:18:12.708663 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.708755 kubelet[2178]: E1101 04:18:12.708670 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.708848 kubelet[2178]: E1101 04:18:12.708822 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.708848 kubelet[2178]: W1101 04:18:12.708830 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.708848 kubelet[2178]: E1101 04:18:12.708839 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.714576 kubelet[2178]: E1101 04:18:12.709051 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.714576 kubelet[2178]: W1101 04:18:12.709066 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.714576 kubelet[2178]: E1101 04:18:12.709077 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.714576 kubelet[2178]: E1101 04:18:12.709288 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.714576 kubelet[2178]: W1101 04:18:12.709296 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.714576 kubelet[2178]: E1101 04:18:12.709362 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 04:18:12.714576 kubelet[2178]: E1101 04:18:12.709546 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.714576 kubelet[2178]: W1101 04:18:12.709554 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.714576 kubelet[2178]: E1101 04:18:12.709563 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.743002 env[1306]: time="2025-11-01T04:18:12.742919172Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 04:18:12.743180 env[1306]: time="2025-11-01T04:18:12.743018829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 04:18:12.743180 env[1306]: time="2025-11-01T04:18:12.743048992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 04:18:12.744039 env[1306]: time="2025-11-01T04:18:12.743798550Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b97b2323ed7a10404197694d4392512e19d77ef41500db8592aebeff02e59d2b pid=2684 runtime=io.containerd.runc.v2 Nov 1 04:18:12.808375 kubelet[2178]: E1101 04:18:12.808240 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.808375 kubelet[2178]: W1101 04:18:12.808261 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.808375 kubelet[2178]: E1101 04:18:12.808282 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.809026 kubelet[2178]: E1101 04:18:12.808868 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.809026 kubelet[2178]: W1101 04:18:12.808880 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.809026 kubelet[2178]: E1101 04:18:12.808893 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 04:18:12.809383 kubelet[2178]: E1101 04:18:12.809224 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.809383 kubelet[2178]: W1101 04:18:12.809234 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.809383 kubelet[2178]: E1101 04:18:12.809245 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.809718 kubelet[2178]: E1101 04:18:12.809579 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.809718 kubelet[2178]: W1101 04:18:12.809589 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.809718 kubelet[2178]: E1101 04:18:12.809600 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.810144 kubelet[2178]: E1101 04:18:12.809988 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.810144 kubelet[2178]: W1101 04:18:12.809999 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.810144 kubelet[2178]: E1101 04:18:12.810019 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.810498 kubelet[2178]: E1101 04:18:12.810338 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.810498 kubelet[2178]: W1101 04:18:12.810360 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.810498 kubelet[2178]: E1101 04:18:12.810384 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.810798 kubelet[2178]: E1101 04:18:12.810680 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.810798 kubelet[2178]: W1101 04:18:12.810690 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.810798 kubelet[2178]: E1101 04:18:12.810777 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 04:18:12.811096 kubelet[2178]: E1101 04:18:12.810961 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.811096 kubelet[2178]: W1101 04:18:12.810970 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.811096 kubelet[2178]: E1101 04:18:12.811071 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.811417 kubelet[2178]: E1101 04:18:12.811264 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.811417 kubelet[2178]: W1101 04:18:12.811273 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.811417 kubelet[2178]: E1101 04:18:12.811394 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.811714 kubelet[2178]: E1101 04:18:12.811597 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.811714 kubelet[2178]: W1101 04:18:12.811607 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.811714 kubelet[2178]: E1101 04:18:12.811693 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.812010 kubelet[2178]: E1101 04:18:12.811893 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.812010 kubelet[2178]: W1101 04:18:12.811902 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.812010 kubelet[2178]: E1101 04:18:12.811990 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.812308 kubelet[2178]: E1101 04:18:12.812175 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.812308 kubelet[2178]: W1101 04:18:12.812184 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.812308 kubelet[2178]: E1101 04:18:12.812268 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 04:18:12.812754 kubelet[2178]: E1101 04:18:12.812614 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.812754 kubelet[2178]: W1101 04:18:12.812625 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.812754 kubelet[2178]: E1101 04:18:12.812660 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.813527 kubelet[2178]: E1101 04:18:12.813383 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.813527 kubelet[2178]: W1101 04:18:12.813395 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.813527 kubelet[2178]: E1101 04:18:12.813456 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.813852 kubelet[2178]: E1101 04:18:12.813727 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.813852 kubelet[2178]: W1101 04:18:12.813738 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.813852 kubelet[2178]: E1101 04:18:12.813768 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.814250 kubelet[2178]: E1101 04:18:12.814241 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.814359 kubelet[2178]: W1101 04:18:12.814334 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.814512 kubelet[2178]: E1101 04:18:12.814502 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.814769 kubelet[2178]: E1101 04:18:12.814759 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.814855 kubelet[2178]: W1101 04:18:12.814837 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.815030 kubelet[2178]: E1101 04:18:12.815019 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 04:18:12.815291 kubelet[2178]: E1101 04:18:12.815282 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.815452 kubelet[2178]: W1101 04:18:12.815439 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.816498 kubelet[2178]: E1101 04:18:12.816484 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.816645 kubelet[2178]: E1101 04:18:12.816635 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.816806 kubelet[2178]: W1101 04:18:12.816793 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.816955 kubelet[2178]: E1101 04:18:12.816945 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.817581 kubelet[2178]: E1101 04:18:12.817567 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.817679 kubelet[2178]: W1101 04:18:12.817666 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.817831 kubelet[2178]: E1101 04:18:12.817820 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.818494 kubelet[2178]: E1101 04:18:12.818481 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.818595 kubelet[2178]: W1101 04:18:12.818583 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.818786 kubelet[2178]: E1101 04:18:12.818774 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.819013 kubelet[2178]: E1101 04:18:12.819001 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.819086 kubelet[2178]: W1101 04:18:12.819074 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.819438 kubelet[2178]: E1101 04:18:12.819416 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 04:18:12.820811 kubelet[2178]: E1101 04:18:12.820797 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.820915 kubelet[2178]: W1101 04:18:12.820901 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.821127 kubelet[2178]: E1101 04:18:12.821113 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.826789 kubelet[2178]: E1101 04:18:12.826774 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.828803 kubelet[2178]: W1101 04:18:12.828786 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.829190 kubelet[2178]: E1101 04:18:12.829177 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.829982 kubelet[2178]: W1101 04:18:12.829951 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.830118 kubelet[2178]: E1101 04:18:12.830104 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.830209 kubelet[2178]: E1101 04:18:12.830197 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.839498 kubelet[2178]: E1101 04:18:12.839483 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:12.839622 kubelet[2178]: W1101 04:18:12.839608 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:12.839690 kubelet[2178]: E1101 04:18:12.839678 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:12.844059 env[1306]: time="2025-11-01T04:18:12.844016563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n5nnw,Uid:8a46453e-6e5a-480f-9150-24cba2ca6eb8,Namespace:calico-system,Attempt:0,} returns sandbox id \"b97b2323ed7a10404197694d4392512e19d77ef41500db8592aebeff02e59d2b\"" Nov 1 04:18:13.296395 systemd[1]: run-containerd-runc-k8s.io-f78355dff1cd4cc6e2bbcf4bf74ee0d51be80eab7b7bb5dd28fe6e34a64f87ed-runc.gWNWOc.mount: Deactivated successfully. 
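The kubelet messages repeated throughout this window describe a single recurring condition rather than many distinct failures: on every plugin probe the kubelet finds a FlexVolume driver directory named nodeagent~uds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, cannot run its uds executable (the binary is not present on this node), gets empty output back from the init call, and then fails to decode that empty output as JSON, which is where "unexpected end of JSON input" comes from. The following is a minimal, hedged sketch of that chain for illustration only; it is not kubelet source code, the driver path is copied from the log above, and a generic map stands in for the driver's real status type.

package main

import (
    "encoding/json"
    "fmt"
    "os"
)

func main() {
    // Driver path taken from the kubelet log above; on this node the binary is absent.
    const driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
    if _, err := os.Stat(driver); err != nil {
        // Reports that the driver executable cannot be found at all.
        fmt.Printf("FlexVolume: driver call failed: executable: %s, error: %v\n", driver, err)
    }

    // With nothing executed, the output handed to the JSON decoder is empty.
    var status map[string]interface{}
    if err := json.Unmarshal([]byte(""), &status); err != nil {
        // Prints the same error text seen in the kubelet lines: unexpected end of JSON input
        fmt.Println("Failed to unmarshal output for command: init, error:", err)
    }
}

Run on a node without the uds binary, this prints a stat error for the driver path followed by the same "unexpected end of JSON input" text logged above; the probe repeats on every volume reconcile as long as the stale nodeagent~uds directory exists without a usable executable.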
Nov 1 04:18:13.837092 kubelet[2178]: E1101 04:18:13.837027 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4ccfj" podUID="f7f98be6-9d36-4c44-bedf-cd179c76bbfe" Nov 1 04:18:14.029262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2524199735.mount: Deactivated successfully. Nov 1 04:18:15.704474 env[1306]: time="2025-11-01T04:18:15.704393120Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:18:15.706362 env[1306]: time="2025-11-01T04:18:15.706084565Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:18:15.707727 env[1306]: time="2025-11-01T04:18:15.707677459Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:18:15.709437 env[1306]: time="2025-11-01T04:18:15.709341991Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:18:15.710817 env[1306]: time="2025-11-01T04:18:15.709891132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 1 04:18:15.716992 env[1306]: time="2025-11-01T04:18:15.713950113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 1 04:18:15.737455 env[1306]: time="2025-11-01T04:18:15.737412214Z" level=info msg="CreateContainer within sandbox \"f78355dff1cd4cc6e2bbcf4bf74ee0d51be80eab7b7bb5dd28fe6e34a64f87ed\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 1 04:18:15.752973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2597930673.mount: Deactivated successfully. 
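The containerd entries here (env[1306]) are logfmt records: space-separated key=value pairs where values containing spaces are double-quoted and inner quotes are backslash-escaped, as in the PullImage line for ghcr.io/flatcar/calico/typha:v3.30.4 above. A small, self-contained sketch of pulling fields out of such a line follows; the sample string is copied from the log, and the regular expression is an assumption that only covers the quoting style visible here, not a general logfmt parser.

package main

import (
    "fmt"
    "regexp"
)

func main() {
    // Sample containerd line copied from the log (journal escaping of the inner quotes preserved).
    line := `time="2025-11-01T04:18:15.709891132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""`

    // Matches key="quoted value with \" escapes" or key=bareword.
    re := regexp.MustCompile(`(\w+)=("(?:[^"\\]|\\.)*"|\S+)`)
    fields := map[string]string{}
    for _, m := range re.FindAllStringSubmatch(line, -1) {
        fields[m[1]] = m[2]
    }

    fmt.Println("time: ", fields["time"])
    fmt.Println("level:", fields["level"])
    fmt.Println("msg:  ", fields["msg"])
}

The same split applies to the ImageCreate/ImageUpdate and CreateContainer lines in this window, which is useful when tracing the calico-typha pull and container start sequence out of a long journal like this one.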
Nov 1 04:18:15.755026 env[1306]: time="2025-11-01T04:18:15.754988965Z" level=info msg="CreateContainer within sandbox \"f78355dff1cd4cc6e2bbcf4bf74ee0d51be80eab7b7bb5dd28fe6e34a64f87ed\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c61ac978b7d5d677e00ad25b039ee1dd3f52723af885651712ad15bb6af3445f\"" Nov 1 04:18:15.756905 env[1306]: time="2025-11-01T04:18:15.756868250Z" level=info msg="StartContainer for \"c61ac978b7d5d677e00ad25b039ee1dd3f52723af885651712ad15bb6af3445f\"" Nov 1 04:18:15.836546 kubelet[2178]: E1101 04:18:15.836495 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4ccfj" podUID="f7f98be6-9d36-4c44-bedf-cd179c76bbfe" Nov 1 04:18:15.868023 env[1306]: time="2025-11-01T04:18:15.867969738Z" level=info msg="StartContainer for \"c61ac978b7d5d677e00ad25b039ee1dd3f52723af885651712ad15bb6af3445f\" returns successfully" Nov 1 04:18:15.958664 kubelet[2178]: I1101 04:18:15.958462 2178 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5489bc6bc7-djd7f" podStartSLOduration=0.821109131 podStartE2EDuration="3.958387241s" podCreationTimestamp="2025-11-01 04:18:12 +0000 UTC" firstStartedPulling="2025-11-01 04:18:12.573781295 +0000 UTC m=+20.034684007" lastFinishedPulling="2025-11-01 04:18:15.71105942 +0000 UTC m=+23.171962117" observedRunningTime="2025-11-01 04:18:15.956016341 +0000 UTC m=+23.416919123" watchObservedRunningTime="2025-11-01 04:18:15.958387241 +0000 UTC m=+23.419290050" Nov 1 04:18:16.008181 kubelet[2178]: E1101 04:18:16.008131 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:16.008181 kubelet[2178]: W1101 04:18:16.008170 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:16.008413 kubelet[2178]: E1101 04:18:16.008195 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:16.008601 kubelet[2178]: E1101 04:18:16.008584 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:16.008661 kubelet[2178]: W1101 04:18:16.008601 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:16.008661 kubelet[2178]: E1101 04:18:16.008615 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 04:18:16.008845 kubelet[2178]: E1101 04:18:16.008830 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:16.008845 kubelet[2178]: W1101 04:18:16.008843 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:16.008934 kubelet[2178]: E1101 04:18:16.008861 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:16.009086 kubelet[2178]: E1101 04:18:16.009073 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:16.009456 kubelet[2178]: W1101 04:18:16.009086 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:16.009456 kubelet[2178]: E1101 04:18:16.009098 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:16.009456 kubelet[2178]: E1101 04:18:16.009315 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:16.009456 kubelet[2178]: W1101 04:18:16.009370 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:16.009456 kubelet[2178]: E1101 04:18:16.009381 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:16.009762 kubelet[2178]: E1101 04:18:16.009746 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:16.009762 kubelet[2178]: W1101 04:18:16.009760 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:16.010025 kubelet[2178]: E1101 04:18:16.009772 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:16.010025 kubelet[2178]: E1101 04:18:16.009957 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:16.010025 kubelet[2178]: W1101 04:18:16.009966 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:16.010025 kubelet[2178]: E1101 04:18:16.009975 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 04:18:16.011407 kubelet[2178]: E1101 04:18:16.011384 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:16.011407 kubelet[2178]: W1101 04:18:16.011403 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:16.011518 kubelet[2178]: E1101 04:18:16.011415 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:16.011671 kubelet[2178]: E1101 04:18:16.011657 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:16.011671 kubelet[2178]: W1101 04:18:16.011671 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:16.011748 kubelet[2178]: E1101 04:18:16.011681 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:16.013404 kubelet[2178]: E1101 04:18:16.013381 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:16.013404 kubelet[2178]: W1101 04:18:16.013398 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:16.013528 kubelet[2178]: E1101 04:18:16.013419 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:16.013697 kubelet[2178]: E1101 04:18:16.013677 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:16.013697 kubelet[2178]: W1101 04:18:16.013690 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:16.013778 kubelet[2178]: E1101 04:18:16.013701 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:16.014414 kubelet[2178]: E1101 04:18:16.014386 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:16.014414 kubelet[2178]: W1101 04:18:16.014408 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:16.014530 kubelet[2178]: E1101 04:18:16.014420 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 04:18:16.014691 kubelet[2178]: E1101 04:18:16.014678 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:16.014691 kubelet[2178]: W1101 04:18:16.014691 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:16.014772 kubelet[2178]: E1101 04:18:16.014701 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:16.015666 kubelet[2178]: E1101 04:18:16.015645 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:16.015666 kubelet[2178]: W1101 04:18:16.015661 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:16.015797 kubelet[2178]: E1101 04:18:16.015683 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:16.015983 kubelet[2178]: E1101 04:18:16.015967 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:16.015983 kubelet[2178]: W1101 04:18:16.015982 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:16.016069 kubelet[2178]: E1101 04:18:16.015995 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:16.039099 kubelet[2178]: E1101 04:18:16.038763 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:16.039099 kubelet[2178]: W1101 04:18:16.038780 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:16.039099 kubelet[2178]: E1101 04:18:16.038799 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:16.039099 kubelet[2178]: E1101 04:18:16.038996 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:16.039099 kubelet[2178]: W1101 04:18:16.039003 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:16.039099 kubelet[2178]: E1101 04:18:16.039015 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 04:18:16.039714 kubelet[2178]: E1101 04:18:16.039373 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:16.039714 kubelet[2178]: W1101 04:18:16.039382 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:16.039714 kubelet[2178]: E1101 04:18:16.039402 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:16.039965 kubelet[2178]: E1101 04:18:16.039859 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:16.039965 kubelet[2178]: W1101 04:18:16.039869 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:16.039965 kubelet[2178]: E1101 04:18:16.039885 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:16.040425 kubelet[2178]: E1101 04:18:16.040122 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:16.040425 kubelet[2178]: W1101 04:18:16.040131 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:16.040425 kubelet[2178]: E1101 04:18:16.040185 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:16.040425 kubelet[2178]: E1101 04:18:16.040349 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:16.040425 kubelet[2178]: W1101 04:18:16.040358 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:16.040425 kubelet[2178]: E1101 04:18:16.040374 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:16.040866 kubelet[2178]: E1101 04:18:16.040759 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:16.040866 kubelet[2178]: W1101 04:18:16.040778 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:16.040866 kubelet[2178]: E1101 04:18:16.040807 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 04:18:16.041187 kubelet[2178]: E1101 04:18:16.041031 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:16.041187 kubelet[2178]: W1101 04:18:16.041041 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:16.041187 kubelet[2178]: E1101 04:18:16.041056 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:16.041708 kubelet[2178]: E1101 04:18:16.041375 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:16.041708 kubelet[2178]: W1101 04:18:16.041385 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:16.041708 kubelet[2178]: E1101 04:18:16.041400 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:16.042397 kubelet[2178]: E1101 04:18:16.042384 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:16.042523 kubelet[2178]: W1101 04:18:16.042483 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:16.042523 kubelet[2178]: E1101 04:18:16.042506 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:16.043468 kubelet[2178]: E1101 04:18:16.043450 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:16.043575 kubelet[2178]: W1101 04:18:16.043477 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:16.043575 kubelet[2178]: E1101 04:18:16.043496 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:16.045409 kubelet[2178]: E1101 04:18:16.045391 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:16.045409 kubelet[2178]: W1101 04:18:16.045406 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:16.045684 kubelet[2178]: E1101 04:18:16.045551 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 04:18:16.045869 kubelet[2178]: E1101 04:18:16.045854 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:16.045932 kubelet[2178]: W1101 04:18:16.045868 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:16.046105 kubelet[2178]: E1101 04:18:16.045983 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:16.046266 kubelet[2178]: E1101 04:18:16.046251 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:16.046266 kubelet[2178]: W1101 04:18:16.046265 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:16.046554 kubelet[2178]: E1101 04:18:16.046420 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:16.046736 kubelet[2178]: E1101 04:18:16.046721 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:16.046736 kubelet[2178]: W1101 04:18:16.046735 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:16.046822 kubelet[2178]: E1101 04:18:16.046773 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:16.047070 kubelet[2178]: E1101 04:18:16.047058 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:16.047547 kubelet[2178]: W1101 04:18:16.047162 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:16.047547 kubelet[2178]: E1101 04:18:16.047187 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:16.048231 kubelet[2178]: E1101 04:18:16.048207 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:16.048231 kubelet[2178]: W1101 04:18:16.048227 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:16.048363 kubelet[2178]: E1101 04:18:16.048241 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 04:18:16.048766 kubelet[2178]: E1101 04:18:16.048746 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:16.048852 kubelet[2178]: W1101 04:18:16.048840 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:16.048922 kubelet[2178]: E1101 04:18:16.048912 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:16.942700 kubelet[2178]: I1101 04:18:16.942655 2178 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 04:18:17.025251 kubelet[2178]: E1101 04:18:17.024187 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:17.025251 kubelet[2178]: W1101 04:18:17.024227 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:17.025251 kubelet[2178]: E1101 04:18:17.024272 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:17.025251 kubelet[2178]: E1101 04:18:17.024568 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:17.025251 kubelet[2178]: W1101 04:18:17.024577 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:17.025251 kubelet[2178]: E1101 04:18:17.024587 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:17.025251 kubelet[2178]: E1101 04:18:17.024757 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:17.025251 kubelet[2178]: W1101 04:18:17.024765 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:17.025251 kubelet[2178]: E1101 04:18:17.024773 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:17.025251 kubelet[2178]: E1101 04:18:17.024978 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:17.025966 kubelet[2178]: W1101 04:18:17.024986 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:17.025966 kubelet[2178]: E1101 04:18:17.024994 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 04:18:17.025966 kubelet[2178]: E1101 04:18:17.025172 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:17.025966 kubelet[2178]: W1101 04:18:17.025179 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:17.025966 kubelet[2178]: E1101 04:18:17.025187 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:17.026847 kubelet[2178]: E1101 04:18:17.026210 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:17.026847 kubelet[2178]: W1101 04:18:17.026223 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:17.026847 kubelet[2178]: E1101 04:18:17.026235 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:17.026847 kubelet[2178]: E1101 04:18:17.026427 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:17.026847 kubelet[2178]: W1101 04:18:17.026434 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:17.026847 kubelet[2178]: E1101 04:18:17.026443 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:17.026847 kubelet[2178]: E1101 04:18:17.026582 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:17.026847 kubelet[2178]: W1101 04:18:17.026588 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:17.026847 kubelet[2178]: E1101 04:18:17.026596 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:17.026847 kubelet[2178]: E1101 04:18:17.026774 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:17.027776 kubelet[2178]: W1101 04:18:17.026782 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:17.027776 kubelet[2178]: E1101 04:18:17.026791 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 04:18:17.027776 kubelet[2178]: E1101 04:18:17.027409 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:17.027776 kubelet[2178]: W1101 04:18:17.027419 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:17.027776 kubelet[2178]: E1101 04:18:17.027431 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:17.027776 kubelet[2178]: E1101 04:18:17.027634 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:17.027776 kubelet[2178]: W1101 04:18:17.027641 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:17.027776 kubelet[2178]: E1101 04:18:17.027651 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:17.028302 kubelet[2178]: E1101 04:18:17.028152 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:17.028302 kubelet[2178]: W1101 04:18:17.028163 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:17.028302 kubelet[2178]: E1101 04:18:17.028173 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:17.028707 kubelet[2178]: E1101 04:18:17.028581 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:17.028707 kubelet[2178]: W1101 04:18:17.028593 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:17.028707 kubelet[2178]: E1101 04:18:17.028614 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:17.029008 kubelet[2178]: E1101 04:18:17.028921 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:17.029008 kubelet[2178]: W1101 04:18:17.028931 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:17.029008 kubelet[2178]: E1101 04:18:17.028941 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 04:18:17.029301 kubelet[2178]: E1101 04:18:17.029291 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:17.029607 kubelet[2178]: W1101 04:18:17.029522 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:17.029607 kubelet[2178]: E1101 04:18:17.029542 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:17.046939 kubelet[2178]: E1101 04:18:17.046463 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:17.046939 kubelet[2178]: W1101 04:18:17.046501 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:17.046939 kubelet[2178]: E1101 04:18:17.046616 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:17.047532 kubelet[2178]: E1101 04:18:17.047358 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:17.047532 kubelet[2178]: W1101 04:18:17.047371 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:17.047532 kubelet[2178]: E1101 04:18:17.047394 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:17.047978 kubelet[2178]: E1101 04:18:17.047790 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:17.047978 kubelet[2178]: W1101 04:18:17.047805 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:17.047978 kubelet[2178]: E1101 04:18:17.047824 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:17.048699 kubelet[2178]: E1101 04:18:17.048286 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:17.048699 kubelet[2178]: W1101 04:18:17.048297 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:17.048699 kubelet[2178]: E1101 04:18:17.048361 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 04:18:17.048699 kubelet[2178]: E1101 04:18:17.048570 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:17.048699 kubelet[2178]: W1101 04:18:17.048579 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:17.048699 kubelet[2178]: E1101 04:18:17.048665 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:17.049183 kubelet[2178]: E1101 04:18:17.049041 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:17.049183 kubelet[2178]: W1101 04:18:17.049051 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:17.049183 kubelet[2178]: E1101 04:18:17.049103 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:17.049923 kubelet[2178]: E1101 04:18:17.049478 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:17.049923 kubelet[2178]: W1101 04:18:17.049489 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:17.049923 kubelet[2178]: E1101 04:18:17.049509 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:17.049923 kubelet[2178]: E1101 04:18:17.049763 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:17.049923 kubelet[2178]: W1101 04:18:17.049772 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:17.049923 kubelet[2178]: E1101 04:18:17.049791 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:17.050399 kubelet[2178]: E1101 04:18:17.050387 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:17.050610 kubelet[2178]: W1101 04:18:17.050475 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:17.050610 kubelet[2178]: E1101 04:18:17.050569 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 04:18:17.050863 kubelet[2178]: E1101 04:18:17.050851 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:17.050964 kubelet[2178]: W1101 04:18:17.050931 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:17.051132 kubelet[2178]: E1101 04:18:17.051119 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:17.051311 kubelet[2178]: E1101 04:18:17.051302 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:17.051405 kubelet[2178]: W1101 04:18:17.051393 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:17.051502 kubelet[2178]: E1101 04:18:17.051479 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:17.051736 kubelet[2178]: E1101 04:18:17.051725 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:17.051815 kubelet[2178]: W1101 04:18:17.051802 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:17.051927 kubelet[2178]: E1101 04:18:17.051914 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:17.052305 kubelet[2178]: E1101 04:18:17.052271 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:17.052305 kubelet[2178]: W1101 04:18:17.052289 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:17.052305 kubelet[2178]: E1101 04:18:17.052312 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:17.053003 kubelet[2178]: E1101 04:18:17.052784 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:17.053003 kubelet[2178]: W1101 04:18:17.052795 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:17.053114 kubelet[2178]: E1101 04:18:17.053094 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 04:18:17.053552 kubelet[2178]: E1101 04:18:17.053518 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:17.053552 kubelet[2178]: W1101 04:18:17.053532 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:17.053552 kubelet[2178]: E1101 04:18:17.053551 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:17.053916 kubelet[2178]: E1101 04:18:17.053905 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:17.053916 kubelet[2178]: W1101 04:18:17.053915 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:17.054029 kubelet[2178]: E1101 04:18:17.053928 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:17.054506 kubelet[2178]: E1101 04:18:17.054491 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:17.054589 kubelet[2178]: W1101 04:18:17.054576 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:17.054677 kubelet[2178]: E1101 04:18:17.054665 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 04:18:17.054928 kubelet[2178]: E1101 04:18:17.054918 2178 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 04:18:17.055087 kubelet[2178]: W1101 04:18:17.055066 2178 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 04:18:17.055163 kubelet[2178]: E1101 04:18:17.055151 2178 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 04:18:17.227683 env[1306]: time="2025-11-01T04:18:17.227578817Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:18:17.231941 env[1306]: time="2025-11-01T04:18:17.231909414Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:18:17.232818 env[1306]: time="2025-11-01T04:18:17.232796192Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:18:17.234297 env[1306]: time="2025-11-01T04:18:17.234274316Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:18:17.234997 env[1306]: time="2025-11-01T04:18:17.234892081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 1 04:18:17.237576 env[1306]: time="2025-11-01T04:18:17.237552333Z" level=info msg="CreateContainer within sandbox \"b97b2323ed7a10404197694d4392512e19d77ef41500db8592aebeff02e59d2b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 1 04:18:17.251799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2176079126.mount: Deactivated successfully. Nov 1 04:18:17.254014 env[1306]: time="2025-11-01T04:18:17.253961807Z" level=info msg="CreateContainer within sandbox \"b97b2323ed7a10404197694d4392512e19d77ef41500db8592aebeff02e59d2b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"de4b9d8b4c72d251a51c40ac5be3b62ce16d68542fa952acc9cb37321bdd4d91\"" Nov 1 04:18:17.256474 env[1306]: time="2025-11-01T04:18:17.256438200Z" level=info msg="StartContainer for \"de4b9d8b4c72d251a51c40ac5be3b62ce16d68542fa952acc9cb37321bdd4d91\"" Nov 1 04:18:17.348039 env[1306]: time="2025-11-01T04:18:17.347991781Z" level=info msg="StartContainer for \"de4b9d8b4c72d251a51c40ac5be3b62ce16d68542fa952acc9cb37321bdd4d91\" returns successfully" Nov 1 04:18:17.396919 env[1306]: time="2025-11-01T04:18:17.396860885Z" level=info msg="shim disconnected" id=de4b9d8b4c72d251a51c40ac5be3b62ce16d68542fa952acc9cb37321bdd4d91 Nov 1 04:18:17.397248 env[1306]: time="2025-11-01T04:18:17.397227663Z" level=warning msg="cleaning up after shim disconnected" id=de4b9d8b4c72d251a51c40ac5be3b62ce16d68542fa952acc9cb37321bdd4d91 namespace=k8s.io Nov 1 04:18:17.397338 env[1306]: time="2025-11-01T04:18:17.397315275Z" level=info msg="cleaning up dead shim" Nov 1 04:18:17.413149 env[1306]: time="2025-11-01T04:18:17.413107325Z" level=warning msg="cleanup warnings time=\"2025-11-01T04:18:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2901 runtime=io.containerd.runc.v2\n" Nov 1 04:18:17.721760 systemd[1]: run-containerd-runc-k8s.io-de4b9d8b4c72d251a51c40ac5be3b62ce16d68542fa952acc9cb37321bdd4d91-runc.XxFS92.mount: Deactivated successfully. 
Nov 1 04:18:17.722588 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de4b9d8b4c72d251a51c40ac5be3b62ce16d68542fa952acc9cb37321bdd4d91-rootfs.mount: Deactivated successfully. Nov 1 04:18:17.836613 kubelet[2178]: E1101 04:18:17.836555 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4ccfj" podUID="f7f98be6-9d36-4c44-bedf-cd179c76bbfe" Nov 1 04:18:17.949270 env[1306]: time="2025-11-01T04:18:17.949156279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 1 04:18:19.836624 kubelet[2178]: E1101 04:18:19.836555 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4ccfj" podUID="f7f98be6-9d36-4c44-bedf-cd179c76bbfe" Nov 1 04:18:21.837417 kubelet[2178]: E1101 04:18:21.837313 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4ccfj" podUID="f7f98be6-9d36-4c44-bedf-cd179c76bbfe" Nov 1 04:18:23.840288 kubelet[2178]: E1101 04:18:23.839874 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4ccfj" podUID="f7f98be6-9d36-4c44-bedf-cd179c76bbfe" Nov 1 04:18:24.013687 env[1306]: time="2025-11-01T04:18:24.013443018Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:18:24.016734 env[1306]: time="2025-11-01T04:18:24.016698981Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:18:24.018709 env[1306]: time="2025-11-01T04:18:24.018679473Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:18:24.020632 env[1306]: time="2025-11-01T04:18:24.020599695Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:18:24.021295 env[1306]: time="2025-11-01T04:18:24.021268998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 1 04:18:24.027545 env[1306]: time="2025-11-01T04:18:24.027510740Z" level=info msg="CreateContainer within sandbox \"b97b2323ed7a10404197694d4392512e19d77ef41500db8592aebeff02e59d2b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 1 04:18:24.046359 env[1306]: time="2025-11-01T04:18:24.043410066Z" level=info msg="CreateContainer within sandbox 
\"b97b2323ed7a10404197694d4392512e19d77ef41500db8592aebeff02e59d2b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c71256039ff4943679d8e7145f32499c3af0935f085fa10f2c4c88afef3baf82\"" Nov 1 04:18:24.045668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1536291196.mount: Deactivated successfully. Nov 1 04:18:24.048194 env[1306]: time="2025-11-01T04:18:24.048163518Z" level=info msg="StartContainer for \"c71256039ff4943679d8e7145f32499c3af0935f085fa10f2c4c88afef3baf82\"" Nov 1 04:18:24.083316 systemd[1]: run-containerd-runc-k8s.io-c71256039ff4943679d8e7145f32499c3af0935f085fa10f2c4c88afef3baf82-runc.cqtoIS.mount: Deactivated successfully. Nov 1 04:18:24.135033 env[1306]: time="2025-11-01T04:18:24.133674384Z" level=info msg="StartContainer for \"c71256039ff4943679d8e7145f32499c3af0935f085fa10f2c4c88afef3baf82\" returns successfully" Nov 1 04:18:24.902407 env[1306]: time="2025-11-01T04:18:24.902227459Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 04:18:24.937558 kubelet[2178]: I1101 04:18:24.935042 2178 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 1 04:18:24.952014 env[1306]: time="2025-11-01T04:18:24.951942314Z" level=info msg="shim disconnected" id=c71256039ff4943679d8e7145f32499c3af0935f085fa10f2c4c88afef3baf82 Nov 1 04:18:24.952014 env[1306]: time="2025-11-01T04:18:24.951997778Z" level=warning msg="cleaning up after shim disconnected" id=c71256039ff4943679d8e7145f32499c3af0935f085fa10f2c4c88afef3baf82 namespace=k8s.io Nov 1 04:18:24.952014 env[1306]: time="2025-11-01T04:18:24.952008982Z" level=info msg="cleaning up dead shim" Nov 1 04:18:25.000942 env[1306]: time="2025-11-01T04:18:24.995528625Z" level=warning msg="cleanup warnings time=\"2025-11-01T04:18:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2970 runtime=io.containerd.runc.v2\n" Nov 1 04:18:25.036287 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c71256039ff4943679d8e7145f32499c3af0935f085fa10f2c4c88afef3baf82-rootfs.mount: Deactivated successfully. 
Nov 1 04:18:25.157284 kubelet[2178]: I1101 04:18:25.157176 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prpcf\" (UniqueName: \"kubernetes.io/projected/cae162c8-7a97-49e7-94b4-82cbdda1df19-kube-api-access-prpcf\") pod \"calico-apiserver-6dbfc7dd7f-64jld\" (UID: \"cae162c8-7a97-49e7-94b4-82cbdda1df19\") " pod="calico-apiserver/calico-apiserver-6dbfc7dd7f-64jld" Nov 1 04:18:25.157284 kubelet[2178]: I1101 04:18:25.157234 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cae162c8-7a97-49e7-94b4-82cbdda1df19-calico-apiserver-certs\") pod \"calico-apiserver-6dbfc7dd7f-64jld\" (UID: \"cae162c8-7a97-49e7-94b4-82cbdda1df19\") " pod="calico-apiserver/calico-apiserver-6dbfc7dd7f-64jld" Nov 1 04:18:25.158042 env[1306]: time="2025-11-01T04:18:25.158002824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 1 04:18:25.257934 kubelet[2178]: I1101 04:18:25.257882 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqpx6\" (UniqueName: \"kubernetes.io/projected/e7537f60-17f7-4f3b-b511-49610ae00add-kube-api-access-hqpx6\") pod \"calico-apiserver-6dbfc7dd7f-w5zxm\" (UID: \"e7537f60-17f7-4f3b-b511-49610ae00add\") " pod="calico-apiserver/calico-apiserver-6dbfc7dd7f-w5zxm" Nov 1 04:18:25.258145 kubelet[2178]: I1101 04:18:25.257995 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxmxz\" (UniqueName: \"kubernetes.io/projected/875389eb-2978-4aa4-ad6e-7b619ce206e3-kube-api-access-lxmxz\") pod \"calico-kube-controllers-5857878c7b-k98b2\" (UID: \"875389eb-2978-4aa4-ad6e-7b619ce206e3\") " pod="calico-system/calico-kube-controllers-5857878c7b-k98b2" Nov 1 04:18:25.258145 kubelet[2178]: I1101 04:18:25.258027 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gkxr\" (UniqueName: \"kubernetes.io/projected/5d45c12e-635b-40c2-a7fd-335986b99fcb-kube-api-access-7gkxr\") pod \"whisker-5869b76698-z4ssd\" (UID: \"5d45c12e-635b-40c2-a7fd-335986b99fcb\") " pod="calico-system/whisker-5869b76698-z4ssd" Nov 1 04:18:25.258145 kubelet[2178]: I1101 04:18:25.258046 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjxnr\" (UniqueName: \"kubernetes.io/projected/f6255d76-4d91-405c-b114-1f4e921c4b8b-kube-api-access-sjxnr\") pod \"coredns-668d6bf9bc-cvw4c\" (UID: \"f6255d76-4d91-405c-b114-1f4e921c4b8b\") " pod="kube-system/coredns-668d6bf9bc-cvw4c" Nov 1 04:18:25.258145 kubelet[2178]: I1101 04:18:25.258069 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d45c12e-635b-40c2-a7fd-335986b99fcb-whisker-ca-bundle\") pod \"whisker-5869b76698-z4ssd\" (UID: \"5d45c12e-635b-40c2-a7fd-335986b99fcb\") " pod="calico-system/whisker-5869b76698-z4ssd" Nov 1 04:18:25.258145 kubelet[2178]: I1101 04:18:25.258101 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/875389eb-2978-4aa4-ad6e-7b619ce206e3-tigera-ca-bundle\") pod \"calico-kube-controllers-5857878c7b-k98b2\" (UID: \"875389eb-2978-4aa4-ad6e-7b619ce206e3\") " 
pod="calico-system/calico-kube-controllers-5857878c7b-k98b2" Nov 1 04:18:25.258349 kubelet[2178]: I1101 04:18:25.258119 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a34f1cb-7b62-4a76-8ec0-fae704ad3a6a-config-volume\") pod \"coredns-668d6bf9bc-jpfzq\" (UID: \"5a34f1cb-7b62-4a76-8ec0-fae704ad3a6a\") " pod="kube-system/coredns-668d6bf9bc-jpfzq" Nov 1 04:18:25.258349 kubelet[2178]: I1101 04:18:25.258141 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6255d76-4d91-405c-b114-1f4e921c4b8b-config-volume\") pod \"coredns-668d6bf9bc-cvw4c\" (UID: \"f6255d76-4d91-405c-b114-1f4e921c4b8b\") " pod="kube-system/coredns-668d6bf9bc-cvw4c" Nov 1 04:18:25.258349 kubelet[2178]: I1101 04:18:25.258159 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt8lf\" (UniqueName: \"kubernetes.io/projected/5a34f1cb-7b62-4a76-8ec0-fae704ad3a6a-kube-api-access-qt8lf\") pod \"coredns-668d6bf9bc-jpfzq\" (UID: \"5a34f1cb-7b62-4a76-8ec0-fae704ad3a6a\") " pod="kube-system/coredns-668d6bf9bc-jpfzq" Nov 1 04:18:25.258349 kubelet[2178]: I1101 04:18:25.258213 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5d45c12e-635b-40c2-a7fd-335986b99fcb-whisker-backend-key-pair\") pod \"whisker-5869b76698-z4ssd\" (UID: \"5d45c12e-635b-40c2-a7fd-335986b99fcb\") " pod="calico-system/whisker-5869b76698-z4ssd" Nov 1 04:18:25.258349 kubelet[2178]: I1101 04:18:25.258232 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e7537f60-17f7-4f3b-b511-49610ae00add-calico-apiserver-certs\") pod \"calico-apiserver-6dbfc7dd7f-w5zxm\" (UID: \"e7537f60-17f7-4f3b-b511-49610ae00add\") " pod="calico-apiserver/calico-apiserver-6dbfc7dd7f-w5zxm" Nov 1 04:18:25.359416 kubelet[2178]: I1101 04:18:25.359314 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hz7s\" (UniqueName: \"kubernetes.io/projected/09a6446d-f1c6-40ae-8ffc-711e84b66ed9-kube-api-access-2hz7s\") pod \"goldmane-666569f655-7nddq\" (UID: \"09a6446d-f1c6-40ae-8ffc-711e84b66ed9\") " pod="calico-system/goldmane-666569f655-7nddq" Nov 1 04:18:25.359652 kubelet[2178]: I1101 04:18:25.359603 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09a6446d-f1c6-40ae-8ffc-711e84b66ed9-goldmane-ca-bundle\") pod \"goldmane-666569f655-7nddq\" (UID: \"09a6446d-f1c6-40ae-8ffc-711e84b66ed9\") " pod="calico-system/goldmane-666569f655-7nddq" Nov 1 04:18:25.359698 kubelet[2178]: I1101 04:18:25.359674 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09a6446d-f1c6-40ae-8ffc-711e84b66ed9-config\") pod \"goldmane-666569f655-7nddq\" (UID: \"09a6446d-f1c6-40ae-8ffc-711e84b66ed9\") " pod="calico-system/goldmane-666569f655-7nddq" Nov 1 04:18:25.359965 kubelet[2178]: I1101 04:18:25.359938 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: 
\"kubernetes.io/secret/09a6446d-f1c6-40ae-8ffc-711e84b66ed9-goldmane-key-pair\") pod \"goldmane-666569f655-7nddq\" (UID: \"09a6446d-f1c6-40ae-8ffc-711e84b66ed9\") " pod="calico-system/goldmane-666569f655-7nddq" Nov 1 04:18:25.465855 env[1306]: time="2025-11-01T04:18:25.464312564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cvw4c,Uid:f6255d76-4d91-405c-b114-1f4e921c4b8b,Namespace:kube-system,Attempt:0,}" Nov 1 04:18:25.468479 env[1306]: time="2025-11-01T04:18:25.468423339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5857878c7b-k98b2,Uid:875389eb-2978-4aa4-ad6e-7b619ce206e3,Namespace:calico-system,Attempt:0,}" Nov 1 04:18:25.484276 env[1306]: time="2025-11-01T04:18:25.483881293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dbfc7dd7f-w5zxm,Uid:e7537f60-17f7-4f3b-b511-49610ae00add,Namespace:calico-apiserver,Attempt:0,}" Nov 1 04:18:25.484276 env[1306]: time="2025-11-01T04:18:25.484249084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5869b76698-z4ssd,Uid:5d45c12e-635b-40c2-a7fd-335986b99fcb,Namespace:calico-system,Attempt:0,}" Nov 1 04:18:25.484820 env[1306]: time="2025-11-01T04:18:25.484504094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dbfc7dd7f-64jld,Uid:cae162c8-7a97-49e7-94b4-82cbdda1df19,Namespace:calico-apiserver,Attempt:0,}" Nov 1 04:18:25.484820 env[1306]: time="2025-11-01T04:18:25.484753933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jpfzq,Uid:5a34f1cb-7b62-4a76-8ec0-fae704ad3a6a,Namespace:kube-system,Attempt:0,}" Nov 1 04:18:25.708831 env[1306]: time="2025-11-01T04:18:25.708724916Z" level=error msg="Failed to destroy network for sandbox \"042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:25.709208 env[1306]: time="2025-11-01T04:18:25.709171399Z" level=error msg="encountered an error cleaning up failed sandbox \"042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:25.709294 env[1306]: time="2025-11-01T04:18:25.709230937Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5857878c7b-k98b2,Uid:875389eb-2978-4aa4-ad6e-7b619ce206e3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:25.711177 kubelet[2178]: E1101 04:18:25.711038 2178 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:25.712930 kubelet[2178]: E1101 04:18:25.711702 2178 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5857878c7b-k98b2" Nov 1 04:18:25.712930 kubelet[2178]: E1101 04:18:25.711748 2178 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5857878c7b-k98b2" Nov 1 04:18:25.712930 kubelet[2178]: E1101 04:18:25.711808 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5857878c7b-k98b2_calico-system(875389eb-2978-4aa4-ad6e-7b619ce206e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5857878c7b-k98b2_calico-system(875389eb-2978-4aa4-ad6e-7b619ce206e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5857878c7b-k98b2" podUID="875389eb-2978-4aa4-ad6e-7b619ce206e3" Nov 1 04:18:25.717108 env[1306]: time="2025-11-01T04:18:25.716552461Z" level=error msg="Failed to destroy network for sandbox \"eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:25.717992 env[1306]: time="2025-11-01T04:18:25.717914285Z" level=error msg="encountered an error cleaning up failed sandbox \"eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:25.717992 env[1306]: time="2025-11-01T04:18:25.717980712Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cvw4c,Uid:f6255d76-4d91-405c-b114-1f4e921c4b8b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:25.719861 kubelet[2178]: E1101 04:18:25.718231 2178 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 1 04:18:25.719861 kubelet[2178]: E1101 04:18:25.718282 2178 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cvw4c" Nov 1 04:18:25.719861 kubelet[2178]: E1101 04:18:25.718301 2178 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cvw4c" Nov 1 04:18:25.720032 kubelet[2178]: E1101 04:18:25.718367 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-cvw4c_kube-system(f6255d76-4d91-405c-b114-1f4e921c4b8b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-cvw4c_kube-system(f6255d76-4d91-405c-b114-1f4e921c4b8b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-cvw4c" podUID="f6255d76-4d91-405c-b114-1f4e921c4b8b" Nov 1 04:18:25.763944 env[1306]: time="2025-11-01T04:18:25.763878512Z" level=error msg="Failed to destroy network for sandbox \"c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:25.764313 env[1306]: time="2025-11-01T04:18:25.764279160Z" level=error msg="encountered an error cleaning up failed sandbox \"c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:25.764398 env[1306]: time="2025-11-01T04:18:25.764355356Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dbfc7dd7f-64jld,Uid:cae162c8-7a97-49e7-94b4-82cbdda1df19,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:25.765106 kubelet[2178]: E1101 04:18:25.764630 2178 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Nov 1 04:18:25.765106 kubelet[2178]: E1101 04:18:25.764698 2178 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dbfc7dd7f-64jld" Nov 1 04:18:25.765106 kubelet[2178]: E1101 04:18:25.764800 2178 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dbfc7dd7f-64jld" Nov 1 04:18:25.765276 kubelet[2178]: E1101 04:18:25.764847 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dbfc7dd7f-64jld_calico-apiserver(cae162c8-7a97-49e7-94b4-82cbdda1df19)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dbfc7dd7f-64jld_calico-apiserver(cae162c8-7a97-49e7-94b4-82cbdda1df19)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dbfc7dd7f-64jld" podUID="cae162c8-7a97-49e7-94b4-82cbdda1df19" Nov 1 04:18:25.771958 env[1306]: time="2025-11-01T04:18:25.771912087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7nddq,Uid:09a6446d-f1c6-40ae-8ffc-711e84b66ed9,Namespace:calico-system,Attempt:0,}" Nov 1 04:18:25.787097 env[1306]: time="2025-11-01T04:18:25.786793285Z" level=error msg="Failed to destroy network for sandbox \"7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:25.788010 env[1306]: time="2025-11-01T04:18:25.787965606Z" level=error msg="encountered an error cleaning up failed sandbox \"7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:25.788171 env[1306]: time="2025-11-01T04:18:25.788029012Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5869b76698-z4ssd,Uid:5d45c12e-635b-40c2-a7fd-335986b99fcb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:25.788715 kubelet[2178]: E1101 04:18:25.788369 
2178 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:25.788715 kubelet[2178]: E1101 04:18:25.788431 2178 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5869b76698-z4ssd" Nov 1 04:18:25.788715 kubelet[2178]: E1101 04:18:25.788453 2178 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5869b76698-z4ssd" Nov 1 04:18:25.790115 kubelet[2178]: E1101 04:18:25.788522 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5869b76698-z4ssd_calico-system(5d45c12e-635b-40c2-a7fd-335986b99fcb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5869b76698-z4ssd_calico-system(5d45c12e-635b-40c2-a7fd-335986b99fcb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5869b76698-z4ssd" podUID="5d45c12e-635b-40c2-a7fd-335986b99fcb" Nov 1 04:18:25.791526 env[1306]: time="2025-11-01T04:18:25.791474078Z" level=error msg="Failed to destroy network for sandbox \"acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:25.792162 env[1306]: time="2025-11-01T04:18:25.791810321Z" level=error msg="encountered an error cleaning up failed sandbox \"acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:25.792162 env[1306]: time="2025-11-01T04:18:25.791860074Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dbfc7dd7f-w5zxm,Uid:e7537f60-17f7-4f3b-b511-49610ae00add,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 
04:18:25.794169 kubelet[2178]: E1101 04:18:25.792658 2178 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:25.794169 kubelet[2178]: E1101 04:18:25.792710 2178 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dbfc7dd7f-w5zxm" Nov 1 04:18:25.794169 kubelet[2178]: E1101 04:18:25.792735 2178 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6dbfc7dd7f-w5zxm" Nov 1 04:18:25.794375 kubelet[2178]: E1101 04:18:25.792780 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6dbfc7dd7f-w5zxm_calico-apiserver(e7537f60-17f7-4f3b-b511-49610ae00add)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6dbfc7dd7f-w5zxm_calico-apiserver(e7537f60-17f7-4f3b-b511-49610ae00add)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dbfc7dd7f-w5zxm" podUID="e7537f60-17f7-4f3b-b511-49610ae00add" Nov 1 04:18:25.800351 env[1306]: time="2025-11-01T04:18:25.800280916Z" level=error msg="Failed to destroy network for sandbox \"5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:25.800705 env[1306]: time="2025-11-01T04:18:25.800665335Z" level=error msg="encountered an error cleaning up failed sandbox \"5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:25.800760 env[1306]: time="2025-11-01T04:18:25.800724182Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jpfzq,Uid:5a34f1cb-7b62-4a76-8ec0-fae704ad3a6a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:25.801271 kubelet[2178]: E1101 04:18:25.800928 2178 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:25.801271 kubelet[2178]: E1101 04:18:25.800975 2178 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jpfzq" Nov 1 04:18:25.801271 kubelet[2178]: E1101 04:18:25.801011 2178 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jpfzq" Nov 1 04:18:25.802663 kubelet[2178]: E1101 04:18:25.801055 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-jpfzq_kube-system(5a34f1cb-7b62-4a76-8ec0-fae704ad3a6a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-jpfzq_kube-system(5a34f1cb-7b62-4a76-8ec0-fae704ad3a6a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jpfzq" podUID="5a34f1cb-7b62-4a76-8ec0-fae704ad3a6a" Nov 1 04:18:25.840618 env[1306]: time="2025-11-01T04:18:25.840570653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4ccfj,Uid:f7f98be6-9d36-4c44-bedf-cd179c76bbfe,Namespace:calico-system,Attempt:0,}" Nov 1 04:18:25.861929 env[1306]: time="2025-11-01T04:18:25.861868769Z" level=error msg="Failed to destroy network for sandbox \"8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:25.862514 env[1306]: time="2025-11-01T04:18:25.862465196Z" level=error msg="encountered an error cleaning up failed sandbox \"8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:25.862659 env[1306]: time="2025-11-01T04:18:25.862633757Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-7nddq,Uid:09a6446d-f1c6-40ae-8ffc-711e84b66ed9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:25.864279 kubelet[2178]: E1101 04:18:25.863146 2178 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:25.864279 kubelet[2178]: E1101 04:18:25.863206 2178 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-7nddq" Nov 1 04:18:25.864279 kubelet[2178]: E1101 04:18:25.863233 2178 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-7nddq" Nov 1 04:18:25.864493 kubelet[2178]: E1101 04:18:25.863297 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-7nddq_calico-system(09a6446d-f1c6-40ae-8ffc-711e84b66ed9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-7nddq_calico-system(09a6446d-f1c6-40ae-8ffc-711e84b66ed9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-7nddq" podUID="09a6446d-f1c6-40ae-8ffc-711e84b66ed9" Nov 1 04:18:25.919661 env[1306]: time="2025-11-01T04:18:25.919554453Z" level=error msg="Failed to destroy network for sandbox \"00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:25.920270 env[1306]: time="2025-11-01T04:18:25.920213472Z" level=error msg="encountered an error cleaning up failed sandbox \"00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:25.920379 env[1306]: time="2025-11-01T04:18:25.920312266Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4ccfj,Uid:f7f98be6-9d36-4c44-bedf-cd179c76bbfe,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:25.921790 kubelet[2178]: E1101 04:18:25.920636 2178 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:25.921790 kubelet[2178]: E1101 04:18:25.920697 2178 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4ccfj" Nov 1 04:18:25.921790 kubelet[2178]: E1101 04:18:25.920727 2178 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4ccfj" Nov 1 04:18:25.921997 kubelet[2178]: E1101 04:18:25.920784 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4ccfj_calico-system(f7f98be6-9d36-4c44-bedf-cd179c76bbfe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4ccfj_calico-system(f7f98be6-9d36-4c44-bedf-cd179c76bbfe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4ccfj" podUID="f7f98be6-9d36-4c44-bedf-cd179c76bbfe" Nov 1 04:18:26.051431 kubelet[2178]: I1101 04:18:26.050916 2178 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" Nov 1 04:18:26.056467 env[1306]: time="2025-11-01T04:18:26.056435035Z" level=info msg="StopPodSandbox for \"eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb\"" Nov 1 04:18:26.057063 kubelet[2178]: I1101 04:18:26.057000 2178 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" Nov 1 04:18:26.058354 env[1306]: time="2025-11-01T04:18:26.057793280Z" level=info msg="StopPodSandbox for \"5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe\"" Nov 1 04:18:26.060103 kubelet[2178]: I1101 04:18:26.059578 2178 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" Nov 1 04:18:26.061356 env[1306]: time="2025-11-01T04:18:26.060847325Z" level=info msg="StopPodSandbox for \"8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03\"" Nov 1 04:18:26.069216 kubelet[2178]: I1101 04:18:26.068199 2178 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" Nov 1 04:18:26.070441 env[1306]: time="2025-11-01T04:18:26.069930439Z" level=info msg="StopPodSandbox for \"00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416\"" Nov 1 04:18:26.088986 kubelet[2178]: I1101 04:18:26.088957 2178 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" Nov 1 04:18:26.091240 env[1306]: time="2025-11-01T04:18:26.089973830Z" level=info msg="StopPodSandbox for \"042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c\"" Nov 1 04:18:26.091957 kubelet[2178]: I1101 04:18:26.091907 2178 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" Nov 1 04:18:26.092591 env[1306]: time="2025-11-01T04:18:26.092560627Z" level=info msg="StopPodSandbox for \"c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a\"" Nov 1 04:18:26.093698 kubelet[2178]: I1101 04:18:26.093386 2178 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" Nov 1 04:18:26.093796 env[1306]: time="2025-11-01T04:18:26.093770043Z" level=info msg="StopPodSandbox for \"acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a\"" Nov 1 04:18:26.096792 kubelet[2178]: I1101 04:18:26.095692 2178 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" Nov 1 04:18:26.096882 env[1306]: time="2025-11-01T04:18:26.096116288Z" level=info msg="StopPodSandbox for \"7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6\"" Nov 1 04:18:26.149866 env[1306]: time="2025-11-01T04:18:26.149806785Z" level=error msg="StopPodSandbox for \"8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03\" failed" error="failed to destroy network for sandbox \"8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:26.150567 kubelet[2178]: E1101 04:18:26.150273 2178 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" Nov 1 04:18:26.150567 kubelet[2178]: E1101 04:18:26.150385 2178 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03"} Nov 1 04:18:26.150567 kubelet[2178]: E1101 04:18:26.150471 2178 
kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"09a6446d-f1c6-40ae-8ffc-711e84b66ed9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 04:18:26.150567 kubelet[2178]: E1101 04:18:26.150515 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"09a6446d-f1c6-40ae-8ffc-711e84b66ed9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-7nddq" podUID="09a6446d-f1c6-40ae-8ffc-711e84b66ed9" Nov 1 04:18:26.153469 env[1306]: time="2025-11-01T04:18:26.153415739Z" level=error msg="StopPodSandbox for \"5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe\" failed" error="failed to destroy network for sandbox \"5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:26.153882 kubelet[2178]: E1101 04:18:26.153725 2178 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" Nov 1 04:18:26.153882 kubelet[2178]: E1101 04:18:26.153770 2178 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe"} Nov 1 04:18:26.153882 kubelet[2178]: E1101 04:18:26.153810 2178 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5a34f1cb-7b62-4a76-8ec0-fae704ad3a6a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 04:18:26.153882 kubelet[2178]: E1101 04:18:26.153834 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5a34f1cb-7b62-4a76-8ec0-fae704ad3a6a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jpfzq" 
podUID="5a34f1cb-7b62-4a76-8ec0-fae704ad3a6a" Nov 1 04:18:26.155152 env[1306]: time="2025-11-01T04:18:26.155117160Z" level=error msg="StopPodSandbox for \"00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416\" failed" error="failed to destroy network for sandbox \"00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:26.155529 kubelet[2178]: E1101 04:18:26.155395 2178 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" Nov 1 04:18:26.155529 kubelet[2178]: E1101 04:18:26.155426 2178 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416"} Nov 1 04:18:26.155529 kubelet[2178]: E1101 04:18:26.155465 2178 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f7f98be6-9d36-4c44-bedf-cd179c76bbfe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 04:18:26.155529 kubelet[2178]: E1101 04:18:26.155493 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f7f98be6-9d36-4c44-bedf-cd179c76bbfe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4ccfj" podUID="f7f98be6-9d36-4c44-bedf-cd179c76bbfe" Nov 1 04:18:26.165598 env[1306]: time="2025-11-01T04:18:26.165447825Z" level=error msg="StopPodSandbox for \"eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb\" failed" error="failed to destroy network for sandbox \"eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:26.168176 kubelet[2178]: E1101 04:18:26.168027 2178 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" Nov 1 04:18:26.168176 kubelet[2178]: E1101 04:18:26.168069 2178 
kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb"} Nov 1 04:18:26.168176 kubelet[2178]: E1101 04:18:26.168110 2178 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f6255d76-4d91-405c-b114-1f4e921c4b8b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 04:18:26.168176 kubelet[2178]: E1101 04:18:26.168135 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f6255d76-4d91-405c-b114-1f4e921c4b8b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-cvw4c" podUID="f6255d76-4d91-405c-b114-1f4e921c4b8b" Nov 1 04:18:26.214846 env[1306]: time="2025-11-01T04:18:26.214742075Z" level=error msg="StopPodSandbox for \"7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6\" failed" error="failed to destroy network for sandbox \"7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:26.215993 kubelet[2178]: E1101 04:18:26.215599 2178 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" Nov 1 04:18:26.215993 kubelet[2178]: E1101 04:18:26.215726 2178 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6"} Nov 1 04:18:26.215993 kubelet[2178]: E1101 04:18:26.215824 2178 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5d45c12e-635b-40c2-a7fd-335986b99fcb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 04:18:26.215993 kubelet[2178]: E1101 04:18:26.215900 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5d45c12e-635b-40c2-a7fd-335986b99fcb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6\\\": plugin type=\\\"calico\\\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5869b76698-z4ssd" podUID="5d45c12e-635b-40c2-a7fd-335986b99fcb" Nov 1 04:18:26.216736 env[1306]: time="2025-11-01T04:18:26.216450266Z" level=error msg="StopPodSandbox for \"acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a\" failed" error="failed to destroy network for sandbox \"acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:26.217453 kubelet[2178]: E1101 04:18:26.217063 2178 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" Nov 1 04:18:26.217453 kubelet[2178]: E1101 04:18:26.217190 2178 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a"} Nov 1 04:18:26.217453 kubelet[2178]: E1101 04:18:26.217276 2178 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e7537f60-17f7-4f3b-b511-49610ae00add\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 04:18:26.217453 kubelet[2178]: E1101 04:18:26.217382 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e7537f60-17f7-4f3b-b511-49610ae00add\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dbfc7dd7f-w5zxm" podUID="e7537f60-17f7-4f3b-b511-49610ae00add" Nov 1 04:18:26.234534 env[1306]: time="2025-11-01T04:18:26.234454751Z" level=error msg="StopPodSandbox for \"c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a\" failed" error="failed to destroy network for sandbox \"c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:26.235161 kubelet[2178]: E1101 04:18:26.234962 2178 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" Nov 1 04:18:26.235161 kubelet[2178]: E1101 04:18:26.235028 2178 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a"} Nov 1 04:18:26.235161 kubelet[2178]: E1101 04:18:26.235081 2178 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cae162c8-7a97-49e7-94b4-82cbdda1df19\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 04:18:26.235161 kubelet[2178]: E1101 04:18:26.235109 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cae162c8-7a97-49e7-94b4-82cbdda1df19\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6dbfc7dd7f-64jld" podUID="cae162c8-7a97-49e7-94b4-82cbdda1df19" Nov 1 04:18:26.237389 env[1306]: time="2025-11-01T04:18:26.237349265Z" level=error msg="StopPodSandbox for \"042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c\" failed" error="failed to destroy network for sandbox \"042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 04:18:26.237784 kubelet[2178]: E1101 04:18:26.237666 2178 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" Nov 1 04:18:26.237784 kubelet[2178]: E1101 04:18:26.237703 2178 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c"} Nov 1 04:18:26.237784 kubelet[2178]: E1101 04:18:26.237741 2178 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"875389eb-2978-4aa4-ad6e-7b619ce206e3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 04:18:26.237784 kubelet[2178]: E1101 04:18:26.237762 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"875389eb-2978-4aa4-ad6e-7b619ce206e3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5857878c7b-k98b2" podUID="875389eb-2978-4aa4-ad6e-7b619ce206e3" Nov 1 04:18:29.622609 kubelet[2178]: I1101 04:18:29.622489 2178 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 04:18:29.705000 audit[3333]: NETFILTER_CFG table=filter:101 family=2 entries=21 op=nft_register_rule pid=3333 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:18:29.720345 kernel: audit: type=1325 audit(1761970709.705:303): table=filter:101 family=2 entries=21 op=nft_register_rule pid=3333 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:18:29.705000 audit[3333]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffda97071d0 a2=0 a3=7ffda97071bc items=0 ppid=2289 pid=3333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:29.705000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:18:29.734603 kernel: audit: type=1300 audit(1761970709.705:303): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffda97071d0 a2=0 a3=7ffda97071bc items=0 ppid=2289 pid=3333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:29.734668 kernel: audit: type=1327 audit(1761970709.705:303): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:18:29.734711 kernel: audit: type=1325 audit(1761970709.724:304): table=nat:102 family=2 entries=19 op=nft_register_chain pid=3333 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:18:29.724000 audit[3333]: NETFILTER_CFG table=nat:102 family=2 entries=19 op=nft_register_chain pid=3333 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:18:29.724000 audit[3333]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffda97071d0 a2=0 a3=7ffda97071bc items=0 ppid=2289 pid=3333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:29.741440 kernel: audit: type=1300 audit(1761970709.724:304): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffda97071d0 a2=0 a3=7ffda97071bc items=0 ppid=2289 pid=3333 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:29.724000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:18:29.743675 kernel: audit: type=1327 audit(1761970709.724:304): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:18:35.106686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1138964654.mount: Deactivated successfully. Nov 1 04:18:35.134094 env[1306]: time="2025-11-01T04:18:35.134017930Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:18:35.136810 env[1306]: time="2025-11-01T04:18:35.136757208Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:18:35.138826 env[1306]: time="2025-11-01T04:18:35.138802858Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:18:35.140749 env[1306]: time="2025-11-01T04:18:35.140726720Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 04:18:35.141512 env[1306]: time="2025-11-01T04:18:35.141487846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 04:18:35.197623 env[1306]: time="2025-11-01T04:18:35.197556514Z" level=info msg="CreateContainer within sandbox \"b97b2323ed7a10404197694d4392512e19d77ef41500db8592aebeff02e59d2b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 04:18:35.221808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2961291711.mount: Deactivated successfully. Nov 1 04:18:35.224550 env[1306]: time="2025-11-01T04:18:35.224494686Z" level=info msg="CreateContainer within sandbox \"b97b2323ed7a10404197694d4392512e19d77ef41500db8592aebeff02e59d2b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6a993f3df2ffe2d60d4e0201f210657e0876693cb90b43227daec1c05a01b439\"" Nov 1 04:18:35.228073 env[1306]: time="2025-11-01T04:18:35.228017366Z" level=info msg="StartContainer for \"6a993f3df2ffe2d60d4e0201f210657e0876693cb90b43227daec1c05a01b439\"" Nov 1 04:18:35.305673 env[1306]: time="2025-11-01T04:18:35.304977780Z" level=info msg="StartContainer for \"6a993f3df2ffe2d60d4e0201f210657e0876693cb90b43227daec1c05a01b439\" returns successfully" Nov 1 04:18:35.524579 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 04:18:35.527742 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 1 04:18:35.750991 env[1306]: time="2025-11-01T04:18:35.750926724Z" level=info msg="StopPodSandbox for \"7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6\"" Nov 1 04:18:36.083744 env[1306]: 2025-11-01 04:18:35.930 [INFO][3396] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" Nov 1 04:18:36.083744 env[1306]: 2025-11-01 04:18:35.930 [INFO][3396] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" iface="eth0" netns="/var/run/netns/cni-2605fe53-52c2-8d06-ed75-3fb32b8a4ac9" Nov 1 04:18:36.083744 env[1306]: 2025-11-01 04:18:35.931 [INFO][3396] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" iface="eth0" netns="/var/run/netns/cni-2605fe53-52c2-8d06-ed75-3fb32b8a4ac9" Nov 1 04:18:36.083744 env[1306]: 2025-11-01 04:18:35.931 [INFO][3396] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" iface="eth0" netns="/var/run/netns/cni-2605fe53-52c2-8d06-ed75-3fb32b8a4ac9" Nov 1 04:18:36.083744 env[1306]: 2025-11-01 04:18:35.931 [INFO][3396] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" Nov 1 04:18:36.083744 env[1306]: 2025-11-01 04:18:35.931 [INFO][3396] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" Nov 1 04:18:36.083744 env[1306]: 2025-11-01 04:18:36.050 [INFO][3403] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" HandleID="k8s-pod-network.7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" Workload="srv--i9e8z.gb1.brightbox.com-k8s-whisker--5869b76698--z4ssd-eth0" Nov 1 04:18:36.083744 env[1306]: 2025-11-01 04:18:36.052 [INFO][3403] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 04:18:36.083744 env[1306]: 2025-11-01 04:18:36.053 [INFO][3403] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 04:18:36.083744 env[1306]: 2025-11-01 04:18:36.067 [WARNING][3403] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" HandleID="k8s-pod-network.7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" Workload="srv--i9e8z.gb1.brightbox.com-k8s-whisker--5869b76698--z4ssd-eth0" Nov 1 04:18:36.083744 env[1306]: 2025-11-01 04:18:36.067 [INFO][3403] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" HandleID="k8s-pod-network.7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" Workload="srv--i9e8z.gb1.brightbox.com-k8s-whisker--5869b76698--z4ssd-eth0" Nov 1 04:18:36.083744 env[1306]: 2025-11-01 04:18:36.071 [INFO][3403] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 04:18:36.083744 env[1306]: 2025-11-01 04:18:36.075 [INFO][3396] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" Nov 1 04:18:36.086294 env[1306]: time="2025-11-01T04:18:36.084951855Z" level=info msg="TearDown network for sandbox \"7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6\" successfully" Nov 1 04:18:36.086294 env[1306]: time="2025-11-01T04:18:36.085030187Z" level=info msg="StopPodSandbox for \"7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6\" returns successfully" Nov 1 04:18:36.111144 systemd[1]: run-netns-cni\x2d2605fe53\x2d52c2\x2d8d06\x2ded75\x2d3fb32b8a4ac9.mount: Deactivated successfully. 
Nov 1 04:18:36.273904 kubelet[2178]: I1101 04:18:36.273768 2178 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5d45c12e-635b-40c2-a7fd-335986b99fcb-whisker-backend-key-pair\") pod \"5d45c12e-635b-40c2-a7fd-335986b99fcb\" (UID: \"5d45c12e-635b-40c2-a7fd-335986b99fcb\") " Nov 1 04:18:36.280460 kubelet[2178]: I1101 04:18:36.280397 2178 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d45c12e-635b-40c2-a7fd-335986b99fcb-whisker-ca-bundle\") pod \"5d45c12e-635b-40c2-a7fd-335986b99fcb\" (UID: \"5d45c12e-635b-40c2-a7fd-335986b99fcb\") " Nov 1 04:18:36.281165 kubelet[2178]: I1101 04:18:36.280830 2178 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gkxr\" (UniqueName: \"kubernetes.io/projected/5d45c12e-635b-40c2-a7fd-335986b99fcb-kube-api-access-7gkxr\") pod \"5d45c12e-635b-40c2-a7fd-335986b99fcb\" (UID: \"5d45c12e-635b-40c2-a7fd-335986b99fcb\") " Nov 1 04:18:36.304847 systemd[1]: var-lib-kubelet-pods-5d45c12e\x2d635b\x2d40c2\x2da7fd\x2d335986b99fcb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7gkxr.mount: Deactivated successfully. Nov 1 04:18:36.326933 kubelet[2178]: I1101 04:18:36.326883 2178 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d45c12e-635b-40c2-a7fd-335986b99fcb-kube-api-access-7gkxr" (OuterVolumeSpecName: "kube-api-access-7gkxr") pod "5d45c12e-635b-40c2-a7fd-335986b99fcb" (UID: "5d45c12e-635b-40c2-a7fd-335986b99fcb"). InnerVolumeSpecName "kube-api-access-7gkxr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 04:18:36.327212 kubelet[2178]: I1101 04:18:36.310212 2178 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d45c12e-635b-40c2-a7fd-335986b99fcb-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "5d45c12e-635b-40c2-a7fd-335986b99fcb" (UID: "5d45c12e-635b-40c2-a7fd-335986b99fcb"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 04:18:36.333189 systemd[1]: var-lib-kubelet-pods-5d45c12e\x2d635b\x2d40c2\x2da7fd\x2d335986b99fcb-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 1 04:18:36.338892 kubelet[2178]: I1101 04:18:36.338819 2178 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d45c12e-635b-40c2-a7fd-335986b99fcb-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "5d45c12e-635b-40c2-a7fd-335986b99fcb" (UID: "5d45c12e-635b-40c2-a7fd-335986b99fcb"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 04:18:36.400209 kubelet[2178]: I1101 04:18:36.400133 2178 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5d45c12e-635b-40c2-a7fd-335986b99fcb-whisker-backend-key-pair\") on node \"srv-i9e8z.gb1.brightbox.com\" DevicePath \"\"" Nov 1 04:18:36.400618 kubelet[2178]: I1101 04:18:36.400581 2178 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d45c12e-635b-40c2-a7fd-335986b99fcb-whisker-ca-bundle\") on node \"srv-i9e8z.gb1.brightbox.com\" DevicePath \"\"" Nov 1 04:18:36.400849 kubelet[2178]: I1101 04:18:36.400808 2178 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7gkxr\" (UniqueName: \"kubernetes.io/projected/5d45c12e-635b-40c2-a7fd-335986b99fcb-kube-api-access-7gkxr\") on node \"srv-i9e8z.gb1.brightbox.com\" DevicePath \"\"" Nov 1 04:18:36.479662 kubelet[2178]: I1101 04:18:36.476315 2178 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-n5nnw" podStartSLOduration=2.178380917 podStartE2EDuration="24.476279665s" podCreationTimestamp="2025-11-01 04:18:12 +0000 UTC" firstStartedPulling="2025-11-01 04:18:12.845199691 +0000 UTC m=+20.306102388" lastFinishedPulling="2025-11-01 04:18:35.143098423 +0000 UTC m=+42.604001136" observedRunningTime="2025-11-01 04:18:36.173050382 +0000 UTC m=+43.633953198" watchObservedRunningTime="2025-11-01 04:18:36.476279665 +0000 UTC m=+43.937182414" Nov 1 04:18:36.708094 kubelet[2178]: I1101 04:18:36.707998 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/10fb6fc1-31fc-4d32-a2e8-032e174f09df-whisker-backend-key-pair\") pod \"whisker-585f46d5f6-8lv48\" (UID: \"10fb6fc1-31fc-4d32-a2e8-032e174f09df\") " pod="calico-system/whisker-585f46d5f6-8lv48" Nov 1 04:18:36.708732 kubelet[2178]: I1101 04:18:36.708629 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzqrz\" (UniqueName: \"kubernetes.io/projected/10fb6fc1-31fc-4d32-a2e8-032e174f09df-kube-api-access-gzqrz\") pod \"whisker-585f46d5f6-8lv48\" (UID: \"10fb6fc1-31fc-4d32-a2e8-032e174f09df\") " pod="calico-system/whisker-585f46d5f6-8lv48" Nov 1 04:18:36.708902 kubelet[2178]: I1101 04:18:36.708772 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10fb6fc1-31fc-4d32-a2e8-032e174f09df-whisker-ca-bundle\") pod \"whisker-585f46d5f6-8lv48\" (UID: \"10fb6fc1-31fc-4d32-a2e8-032e174f09df\") " pod="calico-system/whisker-585f46d5f6-8lv48" Nov 1 04:18:36.837210 env[1306]: time="2025-11-01T04:18:36.836561999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-585f46d5f6-8lv48,Uid:10fb6fc1-31fc-4d32-a2e8-032e174f09df,Namespace:calico-system,Attempt:0,}" Nov 1 04:18:36.844316 env[1306]: time="2025-11-01T04:18:36.843510635Z" level=info msg="StopPodSandbox for \"5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe\"" Nov 1 04:18:36.854133 env[1306]: time="2025-11-01T04:18:36.854061392Z" level=info msg="StopPodSandbox for \"00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416\"" Nov 1 04:18:36.857055 kubelet[2178]: I1101 04:18:36.856991 2178 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5d45c12e-635b-40c2-a7fd-335986b99fcb" path="/var/lib/kubelet/pods/5d45c12e-635b-40c2-a7fd-335986b99fcb/volumes" Nov 1 04:18:37.104012 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 04:18:37.104183 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif7062d948bb: link becomes ready Nov 1 04:18:37.108378 env[1306]: 2025-11-01 04:18:36.963 [INFO][3456] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" Nov 1 04:18:37.108378 env[1306]: 2025-11-01 04:18:36.963 [INFO][3456] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" iface="eth0" netns="/var/run/netns/cni-68e73188-59ae-bb59-93c8-54b7939fb26d" Nov 1 04:18:37.108378 env[1306]: 2025-11-01 04:18:36.967 [INFO][3456] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" iface="eth0" netns="/var/run/netns/cni-68e73188-59ae-bb59-93c8-54b7939fb26d" Nov 1 04:18:37.108378 env[1306]: 2025-11-01 04:18:36.969 [INFO][3456] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" iface="eth0" netns="/var/run/netns/cni-68e73188-59ae-bb59-93c8-54b7939fb26d" Nov 1 04:18:37.108378 env[1306]: 2025-11-01 04:18:36.969 [INFO][3456] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" Nov 1 04:18:37.108378 env[1306]: 2025-11-01 04:18:36.969 [INFO][3456] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" Nov 1 04:18:37.108378 env[1306]: 2025-11-01 04:18:37.012 [INFO][3475] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" HandleID="k8s-pod-network.00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" Workload="srv--i9e8z.gb1.brightbox.com-k8s-csi--node--driver--4ccfj-eth0" Nov 1 04:18:37.108378 env[1306]: 2025-11-01 04:18:37.012 [INFO][3475] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 04:18:37.108378 env[1306]: 2025-11-01 04:18:37.076 [INFO][3475] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 04:18:37.108378 env[1306]: 2025-11-01 04:18:37.088 [WARNING][3475] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" HandleID="k8s-pod-network.00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" Workload="srv--i9e8z.gb1.brightbox.com-k8s-csi--node--driver--4ccfj-eth0" Nov 1 04:18:37.108378 env[1306]: 2025-11-01 04:18:37.088 [INFO][3475] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" HandleID="k8s-pod-network.00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" Workload="srv--i9e8z.gb1.brightbox.com-k8s-csi--node--driver--4ccfj-eth0" Nov 1 04:18:37.108378 env[1306]: 2025-11-01 04:18:37.090 [INFO][3475] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 04:18:37.108378 env[1306]: 2025-11-01 04:18:37.093 [INFO][3456] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" Nov 1 04:18:37.111436 systemd-networkd[1086]: calif7062d948bb: Link UP Nov 1 04:18:37.111984 systemd-networkd[1086]: calif7062d948bb: Gained carrier Nov 1 04:18:37.119516 systemd[1]: run-netns-cni\x2d68e73188\x2d59ae\x2dbb59\x2d93c8\x2d54b7939fb26d.mount: Deactivated successfully. Nov 1 04:18:37.122887 env[1306]: time="2025-11-01T04:18:37.122823135Z" level=info msg="TearDown network for sandbox \"00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416\" successfully" Nov 1 04:18:37.122887 env[1306]: time="2025-11-01T04:18:37.122886221Z" level=info msg="StopPodSandbox for \"00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416\" returns successfully" Nov 1 04:18:37.128000 audit[3537]: AVC avc: denied { write } for pid=3537 comm="tee" name="fd" dev="proc" ino=29961 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 04:18:37.136605 kernel: audit: type=1400 audit(1761970717.128:305): avc: denied { write } for pid=3537 comm="tee" name="fd" dev="proc" ino=29961 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 04:18:37.136684 kernel: audit: type=1300 audit(1761970717.128:305): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcbc0ff7c5 a2=241 a3=1b6 items=1 ppid=3501 pid=3537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:37.128000 audit[3537]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcbc0ff7c5 a2=241 a3=1b6 items=1 ppid=3501 pid=3537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:37.136981 env[1306]: time="2025-11-01T04:18:37.132460223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4ccfj,Uid:f7f98be6-9d36-4c44-bedf-cd179c76bbfe,Namespace:calico-system,Attempt:1,}" Nov 1 04:18:37.144730 kernel: audit: type=1400 audit(1761970717.138:306): avc: denied { write } for pid=3533 comm="tee" name="fd" dev="proc" ino=29441 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 04:18:37.138000 audit[3533]: AVC avc: denied { write } for pid=3533 comm="tee" name="fd" dev="proc" ino=29441 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 04:18:37.138000 audit[3533]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcfa7027b6 a2=241 a3=1b6 items=1 ppid=3505 pid=3533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:37.151376 kernel: audit: type=1300 audit(1761970717.138:306): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcfa7027b6 a2=241 a3=1b6 items=1 ppid=3505 pid=3533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:37.138000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Nov 1 04:18:37.154548 kernel: audit: type=1307 audit(1761970717.138:306): cwd="/etc/service/enabled/node-status-reporter/log" Nov 
1 04:18:37.162516 kernel: audit: type=1302 audit(1761970717.138:306): item=0 name="/dev/fd/63" inode=29430 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:18:37.138000 audit: PATH item=0 name="/dev/fd/63" inode=29430 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:18:37.165036 kubelet[2178]: I1101 04:18:37.161293 2178 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 04:18:37.186558 kernel: audit: type=1327 audit(1761970717.138:306): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 04:18:37.138000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 04:18:37.195562 kernel: audit: type=1307 audit(1761970717.128:305): cwd="/etc/service/enabled/bird6/log" Nov 1 04:18:37.195668 kernel: audit: type=1302 audit(1761970717.128:305): item=0 name="/dev/fd/63" inode=29431 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:18:37.128000 audit: CWD cwd="/etc/service/enabled/bird6/log" Nov 1 04:18:37.128000 audit: PATH item=0 name="/dev/fd/63" inode=29431 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:18:37.201047 kernel: audit: type=1327 audit(1761970717.128:305): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 04:18:37.128000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 04:18:37.204000 audit[3551]: AVC avc: denied { write } for pid=3551 comm="tee" name="fd" dev="proc" ino=29522 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 04:18:37.204000 audit[3551]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc8cee37c5 a2=241 a3=1b6 items=1 ppid=3498 pid=3551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:37.204000 audit: CWD cwd="/etc/service/enabled/felix/log" Nov 1 04:18:37.204000 audit: PATH item=0 name="/dev/fd/63" inode=29449 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:18:37.204000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 04:18:37.211000 audit[3543]: AVC avc: denied { write } for pid=3543 comm="tee" name="fd" dev="proc" ino=29526 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 04:18:37.211000 audit[3543]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffeb46fb7c6 a2=241 a3=1b6 items=1 ppid=3497 pid=3543 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:37.211000 audit: CWD cwd="/etc/service/enabled/bird/log" Nov 1 04:18:37.211000 audit: PATH item=0 name="/dev/fd/63" inode=29443 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:18:37.211000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 04:18:37.212558 env[1306]: 2025-11-01 04:18:36.884 [INFO][3429] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 04:18:37.212558 env[1306]: 2025-11-01 04:18:36.900 [INFO][3429] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--i9e8z.gb1.brightbox.com-k8s-whisker--585f46d5f6--8lv48-eth0 whisker-585f46d5f6- calico-system 10fb6fc1-31fc-4d32-a2e8-032e174f09df 891 0 2025-11-01 04:18:36 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:585f46d5f6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-i9e8z.gb1.brightbox.com whisker-585f46d5f6-8lv48 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif7062d948bb [] [] }} ContainerID="a6dfae10f80b9b5ec5af57126ab351cc5b63d5487a6f288b83c8d74ca1093910" Namespace="calico-system" Pod="whisker-585f46d5f6-8lv48" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-whisker--585f46d5f6--8lv48-" Nov 1 04:18:37.212558 env[1306]: 2025-11-01 04:18:36.900 [INFO][3429] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a6dfae10f80b9b5ec5af57126ab351cc5b63d5487a6f288b83c8d74ca1093910" Namespace="calico-system" Pod="whisker-585f46d5f6-8lv48" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-whisker--585f46d5f6--8lv48-eth0" Nov 1 04:18:37.212558 env[1306]: 2025-11-01 04:18:36.999 [INFO][3466] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6dfae10f80b9b5ec5af57126ab351cc5b63d5487a6f288b83c8d74ca1093910" HandleID="k8s-pod-network.a6dfae10f80b9b5ec5af57126ab351cc5b63d5487a6f288b83c8d74ca1093910" Workload="srv--i9e8z.gb1.brightbox.com-k8s-whisker--585f46d5f6--8lv48-eth0" Nov 1 04:18:37.212558 env[1306]: 2025-11-01 04:18:37.003 [INFO][3466] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a6dfae10f80b9b5ec5af57126ab351cc5b63d5487a6f288b83c8d74ca1093910" HandleID="k8s-pod-network.a6dfae10f80b9b5ec5af57126ab351cc5b63d5487a6f288b83c8d74ca1093910" Workload="srv--i9e8z.gb1.brightbox.com-k8s-whisker--585f46d5f6--8lv48-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002add70), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-i9e8z.gb1.brightbox.com", "pod":"whisker-585f46d5f6-8lv48", "timestamp":"2025-11-01 04:18:36.999264995 +0000 UTC"}, Hostname:"srv-i9e8z.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 04:18:37.212558 env[1306]: 2025-11-01 04:18:37.005 [INFO][3466] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 04:18:37.212558 env[1306]: 2025-11-01 04:18:37.005 [INFO][3466] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 04:18:37.212558 env[1306]: 2025-11-01 04:18:37.005 [INFO][3466] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-i9e8z.gb1.brightbox.com' Nov 1 04:18:37.212558 env[1306]: 2025-11-01 04:18:37.015 [INFO][3466] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a6dfae10f80b9b5ec5af57126ab351cc5b63d5487a6f288b83c8d74ca1093910" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:37.212558 env[1306]: 2025-11-01 04:18:37.034 [INFO][3466] ipam/ipam.go 394: Looking up existing affinities for host host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:37.212558 env[1306]: 2025-11-01 04:18:37.040 [INFO][3466] ipam/ipam.go 511: Trying affinity for 192.168.47.64/26 host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:37.212558 env[1306]: 2025-11-01 04:18:37.043 [INFO][3466] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.64/26 host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:37.212558 env[1306]: 2025-11-01 04:18:37.050 [INFO][3466] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.64/26 host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:37.212558 env[1306]: 2025-11-01 04:18:37.051 [INFO][3466] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.47.64/26 handle="k8s-pod-network.a6dfae10f80b9b5ec5af57126ab351cc5b63d5487a6f288b83c8d74ca1093910" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:37.212558 env[1306]: 2025-11-01 04:18:37.060 [INFO][3466] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a6dfae10f80b9b5ec5af57126ab351cc5b63d5487a6f288b83c8d74ca1093910 Nov 1 04:18:37.212558 env[1306]: 2025-11-01 04:18:37.066 [INFO][3466] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.47.64/26 handle="k8s-pod-network.a6dfae10f80b9b5ec5af57126ab351cc5b63d5487a6f288b83c8d74ca1093910" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:37.212558 env[1306]: 2025-11-01 04:18:37.076 [INFO][3466] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.47.65/26] block=192.168.47.64/26 handle="k8s-pod-network.a6dfae10f80b9b5ec5af57126ab351cc5b63d5487a6f288b83c8d74ca1093910" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:37.212558 env[1306]: 2025-11-01 04:18:37.076 [INFO][3466] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.65/26] handle="k8s-pod-network.a6dfae10f80b9b5ec5af57126ab351cc5b63d5487a6f288b83c8d74ca1093910" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:37.212558 env[1306]: 2025-11-01 04:18:37.076 [INFO][3466] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 04:18:37.212558 env[1306]: 2025-11-01 04:18:37.076 [INFO][3466] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.47.65/26] IPv6=[] ContainerID="a6dfae10f80b9b5ec5af57126ab351cc5b63d5487a6f288b83c8d74ca1093910" HandleID="k8s-pod-network.a6dfae10f80b9b5ec5af57126ab351cc5b63d5487a6f288b83c8d74ca1093910" Workload="srv--i9e8z.gb1.brightbox.com-k8s-whisker--585f46d5f6--8lv48-eth0" Nov 1 04:18:37.213529 env[1306]: 2025-11-01 04:18:37.081 [INFO][3429] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a6dfae10f80b9b5ec5af57126ab351cc5b63d5487a6f288b83c8d74ca1093910" Namespace="calico-system" Pod="whisker-585f46d5f6-8lv48" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-whisker--585f46d5f6--8lv48-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i9e8z.gb1.brightbox.com-k8s-whisker--585f46d5f6--8lv48-eth0", GenerateName:"whisker-585f46d5f6-", Namespace:"calico-system", SelfLink:"", UID:"10fb6fc1-31fc-4d32-a2e8-032e174f09df", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 4, 18, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"585f46d5f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i9e8z.gb1.brightbox.com", ContainerID:"", Pod:"whisker-585f46d5f6-8lv48", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.47.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif7062d948bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 04:18:37.213529 env[1306]: 2025-11-01 04:18:37.081 [INFO][3429] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.65/32] ContainerID="a6dfae10f80b9b5ec5af57126ab351cc5b63d5487a6f288b83c8d74ca1093910" Namespace="calico-system" Pod="whisker-585f46d5f6-8lv48" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-whisker--585f46d5f6--8lv48-eth0" Nov 1 04:18:37.213529 env[1306]: 2025-11-01 04:18:37.081 [INFO][3429] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7062d948bb ContainerID="a6dfae10f80b9b5ec5af57126ab351cc5b63d5487a6f288b83c8d74ca1093910" Namespace="calico-system" Pod="whisker-585f46d5f6-8lv48" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-whisker--585f46d5f6--8lv48-eth0" Nov 1 04:18:37.213529 env[1306]: 2025-11-01 04:18:37.164 [INFO][3429] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6dfae10f80b9b5ec5af57126ab351cc5b63d5487a6f288b83c8d74ca1093910" Namespace="calico-system" Pod="whisker-585f46d5f6-8lv48" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-whisker--585f46d5f6--8lv48-eth0" Nov 1 04:18:37.213529 env[1306]: 2025-11-01 04:18:37.166 [INFO][3429] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a6dfae10f80b9b5ec5af57126ab351cc5b63d5487a6f288b83c8d74ca1093910" Namespace="calico-system" Pod="whisker-585f46d5f6-8lv48" 
WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-whisker--585f46d5f6--8lv48-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i9e8z.gb1.brightbox.com-k8s-whisker--585f46d5f6--8lv48-eth0", GenerateName:"whisker-585f46d5f6-", Namespace:"calico-system", SelfLink:"", UID:"10fb6fc1-31fc-4d32-a2e8-032e174f09df", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 4, 18, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"585f46d5f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i9e8z.gb1.brightbox.com", ContainerID:"a6dfae10f80b9b5ec5af57126ab351cc5b63d5487a6f288b83c8d74ca1093910", Pod:"whisker-585f46d5f6-8lv48", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.47.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif7062d948bb", MAC:"3e:19:fc:ec:78:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 04:18:37.213529 env[1306]: 2025-11-01 04:18:37.206 [INFO][3429] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a6dfae10f80b9b5ec5af57126ab351cc5b63d5487a6f288b83c8d74ca1093910" Namespace="calico-system" Pod="whisker-585f46d5f6-8lv48" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-whisker--585f46d5f6--8lv48-eth0" Nov 1 04:18:37.227000 audit[3557]: AVC avc: denied { write } for pid=3557 comm="tee" name="fd" dev="proc" ino=29540 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 04:18:37.227000 audit[3557]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe3209c7c7 a2=241 a3=1b6 items=1 ppid=3503 pid=3557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:37.227000 audit: CWD cwd="/etc/service/enabled/cni/log" Nov 1 04:18:37.227000 audit: PATH item=0 name="/dev/fd/63" inode=29460 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:18:37.227000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 04:18:37.230000 audit[3548]: AVC avc: denied { write } for pid=3548 comm="tee" name="fd" dev="proc" ino=29546 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 04:18:37.230000 audit[3548]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe34cad7c5 a2=241 a3=1b6 items=1 ppid=3502 pid=3548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:37.230000 
audit: CWD cwd="/etc/service/enabled/confd/log" Nov 1 04:18:37.230000 audit: PATH item=0 name="/dev/fd/63" inode=29446 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:18:37.230000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 04:18:37.236462 env[1306]: 2025-11-01 04:18:36.962 [INFO][3447] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" Nov 1 04:18:37.236462 env[1306]: 2025-11-01 04:18:36.963 [INFO][3447] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" iface="eth0" netns="/var/run/netns/cni-a6b994d3-9923-cd59-0164-bfbdaa16efb1" Nov 1 04:18:37.236462 env[1306]: 2025-11-01 04:18:36.964 [INFO][3447] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" iface="eth0" netns="/var/run/netns/cni-a6b994d3-9923-cd59-0164-bfbdaa16efb1" Nov 1 04:18:37.236462 env[1306]: 2025-11-01 04:18:36.968 [INFO][3447] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" iface="eth0" netns="/var/run/netns/cni-a6b994d3-9923-cd59-0164-bfbdaa16efb1" Nov 1 04:18:37.236462 env[1306]: 2025-11-01 04:18:36.968 [INFO][3447] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" Nov 1 04:18:37.236462 env[1306]: 2025-11-01 04:18:36.968 [INFO][3447] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" Nov 1 04:18:37.236462 env[1306]: 2025-11-01 04:18:37.051 [INFO][3479] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" HandleID="k8s-pod-network.5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" Workload="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jpfzq-eth0" Nov 1 04:18:37.236462 env[1306]: 2025-11-01 04:18:37.051 [INFO][3479] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 04:18:37.236462 env[1306]: 2025-11-01 04:18:37.093 [INFO][3479] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 04:18:37.236462 env[1306]: 2025-11-01 04:18:37.196 [WARNING][3479] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" HandleID="k8s-pod-network.5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" Workload="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jpfzq-eth0" Nov 1 04:18:37.236462 env[1306]: 2025-11-01 04:18:37.196 [INFO][3479] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" HandleID="k8s-pod-network.5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" Workload="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jpfzq-eth0" Nov 1 04:18:37.236462 env[1306]: 2025-11-01 04:18:37.206 [INFO][3479] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 04:18:37.236462 env[1306]: 2025-11-01 04:18:37.231 [INFO][3447] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" Nov 1 04:18:37.236459 systemd[1]: run-netns-cni\x2da6b994d3\x2d9923\x2dcd59\x2d0164\x2dbfbdaa16efb1.mount: Deactivated successfully. Nov 1 04:18:37.237553 env[1306]: time="2025-11-01T04:18:37.237198650Z" level=info msg="TearDown network for sandbox \"5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe\" successfully" Nov 1 04:18:37.237553 env[1306]: time="2025-11-01T04:18:37.237338345Z" level=info msg="StopPodSandbox for \"5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe\" returns successfully" Nov 1 04:18:37.241563 env[1306]: time="2025-11-01T04:18:37.240049160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jpfzq,Uid:5a34f1cb-7b62-4a76-8ec0-fae704ad3a6a,Namespace:kube-system,Attempt:1,}" Nov 1 04:18:37.293000 audit[3575]: AVC avc: denied { write } for pid=3575 comm="tee" name="fd" dev="proc" ino=29988 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Nov 1 04:18:37.293000 audit[3575]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffff19437b5 a2=241 a3=1b6 items=1 ppid=3495 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:37.293000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Nov 1 04:18:37.293000 audit: PATH item=0 name="/dev/fd/63" inode=29981 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 04:18:37.293000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Nov 1 04:18:37.362536 env[1306]: time="2025-11-01T04:18:37.361940171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 04:18:37.362536 env[1306]: time="2025-11-01T04:18:37.362020182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 04:18:37.362536 env[1306]: time="2025-11-01T04:18:37.362040651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 04:18:37.362536 env[1306]: time="2025-11-01T04:18:37.362204046Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a6dfae10f80b9b5ec5af57126ab351cc5b63d5487a6f288b83c8d74ca1093910 pid=3610 runtime=io.containerd.runc.v2 Nov 1 04:18:37.618362 env[1306]: time="2025-11-01T04:18:37.618093465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-585f46d5f6-8lv48,Uid:10fb6fc1-31fc-4d32-a2e8-032e174f09df,Namespace:calico-system,Attempt:0,} returns sandbox id \"a6dfae10f80b9b5ec5af57126ab351cc5b63d5487a6f288b83c8d74ca1093910\"" Nov 1 04:18:37.645048 env[1306]: time="2025-11-01T04:18:37.645004278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 04:18:37.687205 systemd-networkd[1086]: calia6a311739f5: Link UP Nov 1 04:18:37.714516 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia6a311739f5: link becomes ready Nov 1 04:18:37.717424 systemd-networkd[1086]: calia6a311739f5: Gained carrier Nov 1 04:18:37.776388 env[1306]: 2025-11-01 04:18:37.373 [INFO][3583] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 04:18:37.776388 env[1306]: 2025-11-01 04:18:37.403 [INFO][3583] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jpfzq-eth0 coredns-668d6bf9bc- kube-system 5a34f1cb-7b62-4a76-8ec0-fae704ad3a6a 896 0 2025-11-01 04:17:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-i9e8z.gb1.brightbox.com coredns-668d6bf9bc-jpfzq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia6a311739f5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8fc073e948e265c198fd2f4b046f805c3e1d471996a7e1b5ab5460ee7775d504" Namespace="kube-system" Pod="coredns-668d6bf9bc-jpfzq" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jpfzq-" Nov 1 04:18:37.776388 env[1306]: 2025-11-01 04:18:37.403 [INFO][3583] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8fc073e948e265c198fd2f4b046f805c3e1d471996a7e1b5ab5460ee7775d504" Namespace="kube-system" Pod="coredns-668d6bf9bc-jpfzq" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jpfzq-eth0" Nov 1 04:18:37.776388 env[1306]: 2025-11-01 04:18:37.546 [INFO][3636] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8fc073e948e265c198fd2f4b046f805c3e1d471996a7e1b5ab5460ee7775d504" HandleID="k8s-pod-network.8fc073e948e265c198fd2f4b046f805c3e1d471996a7e1b5ab5460ee7775d504" Workload="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jpfzq-eth0" Nov 1 04:18:37.776388 env[1306]: 2025-11-01 04:18:37.547 [INFO][3636] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8fc073e948e265c198fd2f4b046f805c3e1d471996a7e1b5ab5460ee7775d504" HandleID="k8s-pod-network.8fc073e948e265c198fd2f4b046f805c3e1d471996a7e1b5ab5460ee7775d504" Workload="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jpfzq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd5a0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-i9e8z.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-jpfzq", "timestamp":"2025-11-01 04:18:37.546379389 +0000 UTC"}, Hostname:"srv-i9e8z.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 04:18:37.776388 env[1306]: 2025-11-01 04:18:37.557 [INFO][3636] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 04:18:37.776388 env[1306]: 2025-11-01 04:18:37.558 [INFO][3636] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 04:18:37.776388 env[1306]: 2025-11-01 04:18:37.558 [INFO][3636] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-i9e8z.gb1.brightbox.com' Nov 1 04:18:37.776388 env[1306]: 2025-11-01 04:18:37.578 [INFO][3636] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8fc073e948e265c198fd2f4b046f805c3e1d471996a7e1b5ab5460ee7775d504" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:37.776388 env[1306]: 2025-11-01 04:18:37.611 [INFO][3636] ipam/ipam.go 394: Looking up existing affinities for host host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:37.776388 env[1306]: 2025-11-01 04:18:37.630 [INFO][3636] ipam/ipam.go 511: Trying affinity for 192.168.47.64/26 host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:37.776388 env[1306]: 2025-11-01 04:18:37.638 [INFO][3636] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.64/26 host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:37.776388 env[1306]: 2025-11-01 04:18:37.646 [INFO][3636] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.64/26 host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:37.776388 env[1306]: 2025-11-01 04:18:37.646 [INFO][3636] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.47.64/26 handle="k8s-pod-network.8fc073e948e265c198fd2f4b046f805c3e1d471996a7e1b5ab5460ee7775d504" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:37.776388 env[1306]: 2025-11-01 04:18:37.651 [INFO][3636] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8fc073e948e265c198fd2f4b046f805c3e1d471996a7e1b5ab5460ee7775d504 Nov 1 04:18:37.776388 env[1306]: 2025-11-01 04:18:37.662 [INFO][3636] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.47.64/26 handle="k8s-pod-network.8fc073e948e265c198fd2f4b046f805c3e1d471996a7e1b5ab5460ee7775d504" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:37.776388 env[1306]: 2025-11-01 04:18:37.677 [INFO][3636] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.47.66/26] block=192.168.47.64/26 handle="k8s-pod-network.8fc073e948e265c198fd2f4b046f805c3e1d471996a7e1b5ab5460ee7775d504" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:37.776388 env[1306]: 2025-11-01 04:18:37.677 [INFO][3636] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.66/26] handle="k8s-pod-network.8fc073e948e265c198fd2f4b046f805c3e1d471996a7e1b5ab5460ee7775d504" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:37.776388 env[1306]: 2025-11-01 04:18:37.677 [INFO][3636] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 04:18:37.776388 env[1306]: 2025-11-01 04:18:37.677 [INFO][3636] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.47.66/26] IPv6=[] ContainerID="8fc073e948e265c198fd2f4b046f805c3e1d471996a7e1b5ab5460ee7775d504" HandleID="k8s-pod-network.8fc073e948e265c198fd2f4b046f805c3e1d471996a7e1b5ab5460ee7775d504" Workload="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jpfzq-eth0" Nov 1 04:18:37.777296 env[1306]: 2025-11-01 04:18:37.679 [INFO][3583] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8fc073e948e265c198fd2f4b046f805c3e1d471996a7e1b5ab5460ee7775d504" Namespace="kube-system" Pod="coredns-668d6bf9bc-jpfzq" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jpfzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jpfzq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5a34f1cb-7b62-4a76-8ec0-fae704ad3a6a", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 4, 17, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i9e8z.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-jpfzq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6a311739f5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 04:18:37.777296 env[1306]: 2025-11-01 04:18:37.679 [INFO][3583] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.66/32] ContainerID="8fc073e948e265c198fd2f4b046f805c3e1d471996a7e1b5ab5460ee7775d504" Namespace="kube-system" Pod="coredns-668d6bf9bc-jpfzq" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jpfzq-eth0" Nov 1 04:18:37.777296 env[1306]: 2025-11-01 04:18:37.679 [INFO][3583] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia6a311739f5 ContainerID="8fc073e948e265c198fd2f4b046f805c3e1d471996a7e1b5ab5460ee7775d504" Namespace="kube-system" Pod="coredns-668d6bf9bc-jpfzq" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jpfzq-eth0" Nov 1 04:18:37.777296 env[1306]: 2025-11-01 04:18:37.718 [INFO][3583] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8fc073e948e265c198fd2f4b046f805c3e1d471996a7e1b5ab5460ee7775d504" Namespace="kube-system" Pod="coredns-668d6bf9bc-jpfzq" 
WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jpfzq-eth0" Nov 1 04:18:37.777296 env[1306]: 2025-11-01 04:18:37.718 [INFO][3583] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8fc073e948e265c198fd2f4b046f805c3e1d471996a7e1b5ab5460ee7775d504" Namespace="kube-system" Pod="coredns-668d6bf9bc-jpfzq" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jpfzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jpfzq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5a34f1cb-7b62-4a76-8ec0-fae704ad3a6a", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 4, 17, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i9e8z.gb1.brightbox.com", ContainerID:"8fc073e948e265c198fd2f4b046f805c3e1d471996a7e1b5ab5460ee7775d504", Pod:"coredns-668d6bf9bc-jpfzq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6a311739f5", MAC:"aa:2d:9b:16:7a:6f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 04:18:37.777296 env[1306]: 2025-11-01 04:18:37.774 [INFO][3583] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8fc073e948e265c198fd2f4b046f805c3e1d471996a7e1b5ab5460ee7775d504" Namespace="kube-system" Pod="coredns-668d6bf9bc-jpfzq" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jpfzq-eth0" Nov 1 04:18:37.797542 env[1306]: time="2025-11-01T04:18:37.797459269Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 04:18:37.797706 env[1306]: time="2025-11-01T04:18:37.797548563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 04:18:37.797706 env[1306]: time="2025-11-01T04:18:37.797572828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 04:18:37.800513 env[1306]: time="2025-11-01T04:18:37.800440343Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8fc073e948e265c198fd2f4b046f805c3e1d471996a7e1b5ab5460ee7775d504 pid=3700 runtime=io.containerd.runc.v2 Nov 1 04:18:37.838253 env[1306]: time="2025-11-01T04:18:37.838205365Z" level=info msg="StopPodSandbox for \"eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb\"" Nov 1 04:18:37.838739 env[1306]: time="2025-11-01T04:18:37.838196863Z" level=info msg="StopPodSandbox for \"8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03\"" Nov 1 04:18:37.884639 systemd-networkd[1086]: calibf83a63cd89: Link UP Nov 1 04:18:37.898422 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calibf83a63cd89: link becomes ready Nov 1 04:18:37.898158 systemd-networkd[1086]: calibf83a63cd89: Gained carrier Nov 1 04:18:37.955346 env[1306]: 2025-11-01 04:18:37.294 [INFO][3558] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 04:18:37.955346 env[1306]: 2025-11-01 04:18:37.325 [INFO][3558] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--i9e8z.gb1.brightbox.com-k8s-csi--node--driver--4ccfj-eth0 csi-node-driver- calico-system f7f98be6-9d36-4c44-bedf-cd179c76bbfe 897 0 2025-11-01 04:18:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-i9e8z.gb1.brightbox.com csi-node-driver-4ccfj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibf83a63cd89 [] [] }} ContainerID="bee9d8db52df2886a23439a5970f164baff1c513663b18e4759754260393f4fe" Namespace="calico-system" Pod="csi-node-driver-4ccfj" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-csi--node--driver--4ccfj-" Nov 1 04:18:37.955346 env[1306]: 2025-11-01 04:18:37.325 [INFO][3558] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bee9d8db52df2886a23439a5970f164baff1c513663b18e4759754260393f4fe" Namespace="calico-system" Pod="csi-node-driver-4ccfj" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-csi--node--driver--4ccfj-eth0" Nov 1 04:18:37.955346 env[1306]: 2025-11-01 04:18:37.581 [INFO][3615] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bee9d8db52df2886a23439a5970f164baff1c513663b18e4759754260393f4fe" HandleID="k8s-pod-network.bee9d8db52df2886a23439a5970f164baff1c513663b18e4759754260393f4fe" Workload="srv--i9e8z.gb1.brightbox.com-k8s-csi--node--driver--4ccfj-eth0" Nov 1 04:18:37.955346 env[1306]: 2025-11-01 04:18:37.614 [INFO][3615] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bee9d8db52df2886a23439a5970f164baff1c513663b18e4759754260393f4fe" HandleID="k8s-pod-network.bee9d8db52df2886a23439a5970f164baff1c513663b18e4759754260393f4fe" Workload="srv--i9e8z.gb1.brightbox.com-k8s-csi--node--driver--4ccfj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000237ae0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-i9e8z.gb1.brightbox.com", "pod":"csi-node-driver-4ccfj", "timestamp":"2025-11-01 04:18:37.581082092 +0000 UTC"}, Hostname:"srv-i9e8z.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 04:18:37.955346 env[1306]: 2025-11-01 04:18:37.619 [INFO][3615] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 04:18:37.955346 env[1306]: 2025-11-01 04:18:37.677 [INFO][3615] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 04:18:37.955346 env[1306]: 2025-11-01 04:18:37.677 [INFO][3615] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-i9e8z.gb1.brightbox.com' Nov 1 04:18:37.955346 env[1306]: 2025-11-01 04:18:37.729 [INFO][3615] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bee9d8db52df2886a23439a5970f164baff1c513663b18e4759754260393f4fe" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:37.955346 env[1306]: 2025-11-01 04:18:37.747 [INFO][3615] ipam/ipam.go 394: Looking up existing affinities for host host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:37.955346 env[1306]: 2025-11-01 04:18:37.770 [INFO][3615] ipam/ipam.go 511: Trying affinity for 192.168.47.64/26 host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:37.955346 env[1306]: 2025-11-01 04:18:37.793 [INFO][3615] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.64/26 host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:37.955346 env[1306]: 2025-11-01 04:18:37.814 [INFO][3615] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.64/26 host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:37.955346 env[1306]: 2025-11-01 04:18:37.814 [INFO][3615] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.47.64/26 handle="k8s-pod-network.bee9d8db52df2886a23439a5970f164baff1c513663b18e4759754260393f4fe" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:37.955346 env[1306]: 2025-11-01 04:18:37.823 [INFO][3615] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bee9d8db52df2886a23439a5970f164baff1c513663b18e4759754260393f4fe Nov 1 04:18:37.955346 env[1306]: 2025-11-01 04:18:37.842 [INFO][3615] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.47.64/26 handle="k8s-pod-network.bee9d8db52df2886a23439a5970f164baff1c513663b18e4759754260393f4fe" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:37.955346 env[1306]: 2025-11-01 04:18:37.862 [INFO][3615] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.47.67/26] block=192.168.47.64/26 handle="k8s-pod-network.bee9d8db52df2886a23439a5970f164baff1c513663b18e4759754260393f4fe" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:37.955346 env[1306]: 2025-11-01 04:18:37.862 [INFO][3615] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.67/26] handle="k8s-pod-network.bee9d8db52df2886a23439a5970f164baff1c513663b18e4759754260393f4fe" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:37.955346 env[1306]: 2025-11-01 04:18:37.862 [INFO][3615] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 04:18:37.955346 env[1306]: 2025-11-01 04:18:37.862 [INFO][3615] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.47.67/26] IPv6=[] ContainerID="bee9d8db52df2886a23439a5970f164baff1c513663b18e4759754260393f4fe" HandleID="k8s-pod-network.bee9d8db52df2886a23439a5970f164baff1c513663b18e4759754260393f4fe" Workload="srv--i9e8z.gb1.brightbox.com-k8s-csi--node--driver--4ccfj-eth0" Nov 1 04:18:37.961944 env[1306]: 2025-11-01 04:18:37.878 [INFO][3558] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bee9d8db52df2886a23439a5970f164baff1c513663b18e4759754260393f4fe" Namespace="calico-system" Pod="csi-node-driver-4ccfj" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-csi--node--driver--4ccfj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i9e8z.gb1.brightbox.com-k8s-csi--node--driver--4ccfj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f7f98be6-9d36-4c44-bedf-cd179c76bbfe", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 4, 18, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i9e8z.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-4ccfj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.47.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibf83a63cd89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 04:18:37.961944 env[1306]: 2025-11-01 04:18:37.878 [INFO][3558] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.67/32] ContainerID="bee9d8db52df2886a23439a5970f164baff1c513663b18e4759754260393f4fe" Namespace="calico-system" Pod="csi-node-driver-4ccfj" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-csi--node--driver--4ccfj-eth0" Nov 1 04:18:37.961944 env[1306]: 2025-11-01 04:18:37.878 [INFO][3558] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibf83a63cd89 ContainerID="bee9d8db52df2886a23439a5970f164baff1c513663b18e4759754260393f4fe" Namespace="calico-system" Pod="csi-node-driver-4ccfj" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-csi--node--driver--4ccfj-eth0" Nov 1 04:18:37.961944 env[1306]: 2025-11-01 04:18:37.906 [INFO][3558] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bee9d8db52df2886a23439a5970f164baff1c513663b18e4759754260393f4fe" Namespace="calico-system" Pod="csi-node-driver-4ccfj" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-csi--node--driver--4ccfj-eth0" Nov 1 04:18:37.961944 env[1306]: 2025-11-01 04:18:37.907 [INFO][3558] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bee9d8db52df2886a23439a5970f164baff1c513663b18e4759754260393f4fe" 
Namespace="calico-system" Pod="csi-node-driver-4ccfj" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-csi--node--driver--4ccfj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i9e8z.gb1.brightbox.com-k8s-csi--node--driver--4ccfj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f7f98be6-9d36-4c44-bedf-cd179c76bbfe", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 4, 18, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i9e8z.gb1.brightbox.com", ContainerID:"bee9d8db52df2886a23439a5970f164baff1c513663b18e4759754260393f4fe", Pod:"csi-node-driver-4ccfj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.47.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibf83a63cd89", MAC:"ba:e1:31:6b:93:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 04:18:37.961944 env[1306]: 2025-11-01 04:18:37.939 [INFO][3558] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bee9d8db52df2886a23439a5970f164baff1c513663b18e4759754260393f4fe" Namespace="calico-system" Pod="csi-node-driver-4ccfj" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-csi--node--driver--4ccfj-eth0" Nov 1 04:18:37.961944 env[1306]: time="2025-11-01T04:18:37.959445802Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 04:18:37.961944 env[1306]: time="2025-11-01T04:18:37.960818326Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 04:18:37.963282 kubelet[2178]: E1101 04:18:37.961159 2178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 04:18:37.963282 kubelet[2178]: E1101 04:18:37.963031 2178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 04:18:37.985977 kubelet[2178]: E1101 04:18:37.985903 2178 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b3e36ba512a94ba9aab826249dfe86b5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gzqrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-585f46d5f6-8lv48_calico-system(10fb6fc1-31fc-4d32-a2e8-032e174f09df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 04:18:37.992572 env[1306]: time="2025-11-01T04:18:37.992525874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 04:18:38.055625 env[1306]: time="2025-11-01T04:18:38.055121843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jpfzq,Uid:5a34f1cb-7b62-4a76-8ec0-fae704ad3a6a,Namespace:kube-system,Attempt:1,} returns sandbox id \"8fc073e948e265c198fd2f4b046f805c3e1d471996a7e1b5ab5460ee7775d504\"" Nov 1 04:18:38.060351 env[1306]: time="2025-11-01T04:18:38.060200187Z" level=info msg="CreateContainer within sandbox \"8fc073e948e265c198fd2f4b046f805c3e1d471996a7e1b5ab5460ee7775d504\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 04:18:38.062000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.062000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.062000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.062000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.062000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.062000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.062000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.062000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.062000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.062000 audit: BPF prog-id=10 op=LOAD Nov 1 04:18:38.062000 audit[3785]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdc6644ef0 a2=98 a3=1fffffffffffffff items=0 ppid=3499 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.062000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 04:18:38.062000 audit: BPF prog-id=10 op=UNLOAD Nov 1 04:18:38.062000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.062000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.062000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.062000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.062000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.062000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.062000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.062000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.062000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.062000 audit: BPF prog-id=11 op=LOAD Nov 1 04:18:38.062000 audit[3785]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdc6644dd0 a2=94 a3=3 items=0 ppid=3499 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.062000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 04:18:38.062000 audit: BPF prog-id=11 op=UNLOAD Nov 1 04:18:38.062000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.062000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.062000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.062000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.062000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.062000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.062000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.062000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.062000 audit[3785]: AVC avc: denied { bpf } for pid=3785 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.062000 audit: BPF prog-id=12 op=LOAD Nov 1 04:18:38.062000 audit[3785]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdc6644e10 a2=94 a3=7ffdc6644ff0 items=0 ppid=3499 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.062000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 04:18:38.062000 audit: BPF prog-id=12 op=UNLOAD Nov 1 04:18:38.062000 audit[3785]: AVC avc: denied { perfmon } for pid=3785 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.062000 audit[3785]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffdc6644ee0 a2=50 a3=a000000085 items=0 ppid=3499 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.062000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Nov 1 04:18:38.065000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.065000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.065000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.065000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.065000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.065000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.065000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.065000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.065000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.065000 audit: BPF prog-id=13 op=LOAD Nov 1 04:18:38.065000 audit[3787]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff446a0af0 a2=98 a3=3 items=0 ppid=3499 pid=3787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.065000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 04:18:38.068000 audit: BPF prog-id=13 op=UNLOAD Nov 1 04:18:38.069000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.069000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.069000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.069000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.069000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.069000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.069000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.069000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.069000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.069000 audit: BPF prog-id=14 op=LOAD Nov 1 04:18:38.069000 audit[3787]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff446a08e0 a2=94 a3=54428f items=0 ppid=3499 pid=3787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.069000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 04:18:38.069000 audit: BPF prog-id=14 op=UNLOAD Nov 1 04:18:38.069000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.069000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.069000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.069000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.069000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.069000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.069000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.069000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.069000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.069000 audit: BPF prog-id=15 op=LOAD Nov 1 04:18:38.069000 audit[3787]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff446a0910 a2=94 a3=2 items=0 ppid=3499 pid=3787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.069000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 04:18:38.069000 audit: BPF prog-id=15 op=UNLOAD Nov 1 04:18:38.083112 env[1306]: time="2025-11-01T04:18:38.083069586Z" level=info msg="CreateContainer within sandbox \"8fc073e948e265c198fd2f4b046f805c3e1d471996a7e1b5ab5460ee7775d504\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"94ca5bfadb80759cd07fb3e01a95cca43f1c90aaa1e5495e64b16a9dc113fea7\"" Nov 1 04:18:38.089450 env[1306]: time="2025-11-01T04:18:38.088489408Z" level=info msg="StartContainer for \"94ca5bfadb80759cd07fb3e01a95cca43f1c90aaa1e5495e64b16a9dc113fea7\"" Nov 1 04:18:38.115532 env[1306]: time="2025-11-01T04:18:38.094707670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 04:18:38.115532 env[1306]: time="2025-11-01T04:18:38.094771415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 04:18:38.115532 env[1306]: time="2025-11-01T04:18:38.094783307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 04:18:38.115532 env[1306]: time="2025-11-01T04:18:38.095061504Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bee9d8db52df2886a23439a5970f164baff1c513663b18e4759754260393f4fe pid=3780 runtime=io.containerd.runc.v2 Nov 1 04:18:38.175285 systemd-networkd[1086]: calif7062d948bb: Gained IPv6LL Nov 1 04:18:38.239502 systemd[1]: run-containerd-runc-k8s.io-bee9d8db52df2886a23439a5970f164baff1c513663b18e4759754260393f4fe-runc.rzIK2G.mount: Deactivated successfully. 
Nov 1 04:18:38.318103 env[1306]: time="2025-11-01T04:18:38.318046123Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 04:18:38.320917 env[1306]: time="2025-11-01T04:18:38.318664938Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 04:18:38.321066 kubelet[2178]: E1101 04:18:38.318919 2178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 04:18:38.321066 kubelet[2178]: E1101 04:18:38.318973 2178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 04:18:38.321279 kubelet[2178]: E1101 04:18:38.319098 2178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gzqrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-585f46d5f6-8lv48_calico-system(10fb6fc1-31fc-4d32-a2e8-032e174f09df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 04:18:38.325835 kubelet[2178]: E1101 04:18:38.324255 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-585f46d5f6-8lv48" podUID="10fb6fc1-31fc-4d32-a2e8-032e174f09df" Nov 1 04:18:38.344019 env[1306]: 2025-11-01 04:18:38.087 [INFO][3744] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" Nov 1 04:18:38.344019 env[1306]: 2025-11-01 04:18:38.087 [INFO][3744] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" iface="eth0" netns="/var/run/netns/cni-4eb70a5d-9464-a957-3fec-b92871a3c7d4" Nov 1 04:18:38.344019 env[1306]: 2025-11-01 04:18:38.088 [INFO][3744] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" iface="eth0" netns="/var/run/netns/cni-4eb70a5d-9464-a957-3fec-b92871a3c7d4" Nov 1 04:18:38.344019 env[1306]: 2025-11-01 04:18:38.089 [INFO][3744] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" iface="eth0" netns="/var/run/netns/cni-4eb70a5d-9464-a957-3fec-b92871a3c7d4" Nov 1 04:18:38.344019 env[1306]: 2025-11-01 04:18:38.089 [INFO][3744] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" Nov 1 04:18:38.344019 env[1306]: 2025-11-01 04:18:38.089 [INFO][3744] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" Nov 1 04:18:38.344019 env[1306]: 2025-11-01 04:18:38.297 [INFO][3798] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" HandleID="k8s-pod-network.eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" Workload="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cvw4c-eth0" Nov 1 04:18:38.344019 env[1306]: 2025-11-01 04:18:38.299 [INFO][3798] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 04:18:38.344019 env[1306]: 2025-11-01 04:18:38.300 [INFO][3798] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 04:18:38.344019 env[1306]: 2025-11-01 04:18:38.311 [WARNING][3798] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" HandleID="k8s-pod-network.eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" Workload="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cvw4c-eth0" Nov 1 04:18:38.344019 env[1306]: 2025-11-01 04:18:38.311 [INFO][3798] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" HandleID="k8s-pod-network.eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" Workload="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cvw4c-eth0" Nov 1 04:18:38.344019 env[1306]: 2025-11-01 04:18:38.313 [INFO][3798] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 04:18:38.344019 env[1306]: 2025-11-01 04:18:38.334 [INFO][3744] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" Nov 1 04:18:38.344019 env[1306]: time="2025-11-01T04:18:38.336847526Z" level=info msg="TearDown network for sandbox \"eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb\" successfully" Nov 1 04:18:38.344019 env[1306]: time="2025-11-01T04:18:38.336896556Z" level=info msg="StopPodSandbox for \"eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb\" returns successfully" Nov 1 04:18:38.344019 env[1306]: time="2025-11-01T04:18:38.337603758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cvw4c,Uid:f6255d76-4d91-405c-b114-1f4e921c4b8b,Namespace:kube-system,Attempt:1,}" Nov 1 04:18:38.394803 env[1306]: 2025-11-01 04:18:38.145 [INFO][3748] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" Nov 1 04:18:38.394803 env[1306]: 2025-11-01 04:18:38.145 [INFO][3748] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" iface="eth0" netns="/var/run/netns/cni-1c2b9bfe-86e6-ef4b-701b-77adbfb2431d" Nov 1 04:18:38.394803 env[1306]: 2025-11-01 04:18:38.153 [INFO][3748] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" iface="eth0" netns="/var/run/netns/cni-1c2b9bfe-86e6-ef4b-701b-77adbfb2431d" Nov 1 04:18:38.394803 env[1306]: 2025-11-01 04:18:38.157 [INFO][3748] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" iface="eth0" netns="/var/run/netns/cni-1c2b9bfe-86e6-ef4b-701b-77adbfb2431d" Nov 1 04:18:38.394803 env[1306]: 2025-11-01 04:18:38.157 [INFO][3748] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" Nov 1 04:18:38.394803 env[1306]: 2025-11-01 04:18:38.157 [INFO][3748] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" Nov 1 04:18:38.394803 env[1306]: 2025-11-01 04:18:38.369 [INFO][3816] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" HandleID="k8s-pod-network.8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" Workload="srv--i9e8z.gb1.brightbox.com-k8s-goldmane--666569f655--7nddq-eth0" Nov 1 04:18:38.394803 env[1306]: 2025-11-01 04:18:38.370 [INFO][3816] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 04:18:38.394803 env[1306]: 2025-11-01 04:18:38.370 [INFO][3816] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 04:18:38.394803 env[1306]: 2025-11-01 04:18:38.383 [WARNING][3816] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" HandleID="k8s-pod-network.8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" Workload="srv--i9e8z.gb1.brightbox.com-k8s-goldmane--666569f655--7nddq-eth0" Nov 1 04:18:38.394803 env[1306]: 2025-11-01 04:18:38.383 [INFO][3816] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" HandleID="k8s-pod-network.8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" Workload="srv--i9e8z.gb1.brightbox.com-k8s-goldmane--666569f655--7nddq-eth0" Nov 1 04:18:38.394803 env[1306]: 2025-11-01 04:18:38.386 [INFO][3816] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 04:18:38.394803 env[1306]: 2025-11-01 04:18:38.390 [INFO][3748] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" Nov 1 04:18:38.395329 env[1306]: time="2025-11-01T04:18:38.394932055Z" level=info msg="TearDown network for sandbox \"8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03\" successfully" Nov 1 04:18:38.395329 env[1306]: time="2025-11-01T04:18:38.394969140Z" level=info msg="StopPodSandbox for \"8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03\" returns successfully" Nov 1 04:18:38.397647 env[1306]: time="2025-11-01T04:18:38.397611971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7nddq,Uid:09a6446d-f1c6-40ae-8ffc-711e84b66ed9,Namespace:calico-system,Attempt:1,}" Nov 1 04:18:38.418661 env[1306]: time="2025-11-01T04:18:38.418619768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4ccfj,Uid:f7f98be6-9d36-4c44-bedf-cd179c76bbfe,Namespace:calico-system,Attempt:1,} returns sandbox id \"bee9d8db52df2886a23439a5970f164baff1c513663b18e4759754260393f4fe\"" Nov 1 04:18:38.418000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.418000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.418000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.418000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.418000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.418000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.418000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.418000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.418000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.418000 audit: BPF prog-id=16 op=LOAD Nov 1 04:18:38.420273 env[1306]: time="2025-11-01T04:18:38.420241925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 04:18:38.418000 audit[3787]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff446a07d0 a2=94 a3=1 items=0 ppid=3499 pid=3787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.418000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 04:18:38.420000 audit: BPF prog-id=16 op=UNLOAD Nov 1 04:18:38.421000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.421000 audit[3787]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fff446a08a0 a2=50 a3=7fff446a0980 items=0 ppid=3499 pid=3787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.421000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 04:18:38.434000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.434000 audit[3787]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff446a07e0 a2=28 a3=0 items=0 ppid=3499 pid=3787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.434000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 04:18:38.434000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.434000 audit[3787]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff446a0810 a2=28 a3=0 items=0 ppid=3499 pid=3787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.434000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 04:18:38.435000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.435000 audit[3787]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff446a0720 a2=28 a3=0 items=0 ppid=3499 pid=3787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.435000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 04:18:38.435000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.435000 audit[3787]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff446a0830 a2=28 a3=0 items=0 ppid=3499 pid=3787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.435000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 04:18:38.435000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.435000 
audit[3787]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff446a0810 a2=28 a3=0 items=0 ppid=3499 pid=3787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.435000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 04:18:38.435000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.435000 audit[3787]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff446a0800 a2=28 a3=0 items=0 ppid=3499 pid=3787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.435000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 04:18:38.435000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.435000 audit[3787]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff446a0830 a2=28 a3=0 items=0 ppid=3499 pid=3787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.435000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 04:18:38.435000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.435000 audit[3787]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff446a0810 a2=28 a3=0 items=0 ppid=3499 pid=3787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.435000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 04:18:38.435000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.435000 audit[3787]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff446a0830 a2=28 a3=0 items=0 ppid=3499 pid=3787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.435000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 04:18:38.435000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.435000 audit[3787]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff446a0800 a2=28 a3=0 items=0 ppid=3499 pid=3787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.435000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 04:18:38.435000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.435000 audit[3787]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff446a0870 a2=28 a3=0 items=0 ppid=3499 pid=3787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.435000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 04:18:38.435000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.435000 audit[3787]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff446a0620 a2=50 a3=1 items=0 ppid=3499 pid=3787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.435000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 04:18:38.435000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.435000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.435000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.435000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.435000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.435000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.435000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.435000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.435000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.435000 audit: BPF prog-id=17 op=LOAD Nov 1 04:18:38.435000 audit[3787]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff446a0620 a2=94 a3=5 items=0 ppid=3499 pid=3787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.435000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 04:18:38.436000 audit: BPF prog-id=17 op=UNLOAD Nov 1 04:18:38.436000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.436000 audit[3787]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff446a06d0 a2=50 a3=1 items=0 ppid=3499 pid=3787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.436000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 04:18:38.436000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.436000 audit[3787]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fff446a07f0 a2=4 a3=38 items=0 ppid=3499 pid=3787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.436000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 04:18:38.436000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.436000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.436000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.436000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.436000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.436000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.436000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.436000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.436000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.436000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.436000 audit[3787]: AVC avc: denied { confidentiality } for pid=3787 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 04:18:38.436000 audit[3787]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff446a0840 a2=94 a3=6 items=0 ppid=3499 pid=3787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.436000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 04:18:38.437000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.437000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.437000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.437000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.437000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.437000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.437000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.437000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.437000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.437000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.437000 audit[3787]: AVC avc: denied { confidentiality } for pid=3787 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 04:18:38.437000 audit[3787]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff4469fff0 a2=94 a3=88 items=0 ppid=3499 pid=3787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.437000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 04:18:38.439000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.439000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.439000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.439000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.439000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.439000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.439000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.439000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.439000 audit[3787]: AVC avc: denied { perfmon } for pid=3787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.439000 audit[3787]: AVC avc: denied { bpf } for pid=3787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.439000 audit[3787]: AVC avc: denied { confidentiality } for pid=3787 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 04:18:38.439000 audit[3787]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff4469fff0 a2=94 a3=88 items=0 ppid=3499 pid=3787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.439000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Nov 1 04:18:38.453866 env[1306]: time="2025-11-01T04:18:38.451709582Z" level=info msg="StartContainer for \"94ca5bfadb80759cd07fb3e01a95cca43f1c90aaa1e5495e64b16a9dc113fea7\" returns successfully" Nov 1 04:18:38.456000 audit[3886]: AVC avc: denied { bpf } for pid=3886 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.456000 audit[3886]: AVC avc: denied { bpf } for pid=3886 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.456000 audit[3886]: AVC avc: denied { perfmon } 
for pid=3886 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.456000 audit[3886]: AVC avc: denied { perfmon } for pid=3886 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.456000 audit[3886]: AVC avc: denied { perfmon } for pid=3886 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.456000 audit[3886]: AVC avc: denied { perfmon } for pid=3886 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.456000 audit[3886]: AVC avc: denied { perfmon } for pid=3886 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.456000 audit[3886]: AVC avc: denied { bpf } for pid=3886 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.456000 audit[3886]: AVC avc: denied { bpf } for pid=3886 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.456000 audit: BPF prog-id=18 op=LOAD Nov 1 04:18:38.456000 audit[3886]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffee38c2d70 a2=98 a3=1999999999999999 items=0 ppid=3499 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.456000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Nov 1 04:18:38.457000 audit: BPF prog-id=18 op=UNLOAD Nov 1 04:18:38.457000 audit[3886]: AVC avc: denied { bpf } for pid=3886 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.457000 audit[3886]: AVC avc: denied { bpf } for pid=3886 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.457000 audit[3886]: AVC avc: denied { perfmon } for pid=3886 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.457000 audit[3886]: AVC avc: denied { perfmon } for pid=3886 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.457000 audit[3886]: AVC avc: denied { perfmon } for pid=3886 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.457000 audit[3886]: AVC avc: denied { perfmon } for pid=3886 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.457000 audit[3886]: AVC avc: denied { perfmon } for pid=3886 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.457000 audit[3886]: AVC avc: denied { bpf } for pid=3886 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.457000 audit[3886]: AVC avc: denied { bpf } for pid=3886 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.457000 audit: BPF prog-id=19 op=LOAD Nov 1 04:18:38.457000 audit[3886]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffee38c2c50 a2=94 a3=ffff items=0 ppid=3499 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.457000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Nov 1 04:18:38.458000 audit: BPF prog-id=19 op=UNLOAD Nov 1 04:18:38.458000 audit[3886]: AVC avc: denied { bpf } for pid=3886 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.458000 audit[3886]: AVC avc: denied { bpf } for pid=3886 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.458000 audit[3886]: AVC avc: denied { perfmon } for pid=3886 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.458000 audit[3886]: AVC avc: denied { perfmon } for pid=3886 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.458000 audit[3886]: AVC avc: denied { perfmon } for pid=3886 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.458000 audit[3886]: AVC avc: denied { perfmon } for pid=3886 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.458000 audit[3886]: AVC avc: denied { perfmon } for pid=3886 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.458000 audit[3886]: AVC avc: denied { bpf } for pid=3886 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.458000 audit[3886]: AVC avc: denied { bpf } for pid=3886 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.458000 audit: BPF prog-id=20 op=LOAD Nov 1 04:18:38.458000 audit[3886]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffee38c2c90 a2=94 a3=7ffee38c2e70 items=0 ppid=3499 pid=3886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.458000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Nov 1 04:18:38.458000 audit: BPF prog-id=20 op=UNLOAD Nov 1 04:18:38.656113 systemd-networkd[1086]: vxlan.calico: Link UP Nov 1 04:18:38.656122 systemd-networkd[1086]: vxlan.calico: Gained carrier Nov 1 04:18:38.704000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.704000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.704000 audit[3931]: AVC avc: denied { perfmon } for pid=3931 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.704000 audit[3931]: AVC avc: denied { perfmon } for pid=3931 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.704000 audit[3931]: AVC avc: denied { perfmon } for pid=3931 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.704000 audit[3931]: AVC avc: denied { perfmon } for pid=3931 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.704000 audit[3931]: AVC avc: denied { perfmon } for pid=3931 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.704000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.704000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.704000 audit: BPF prog-id=21 op=LOAD Nov 1 04:18:38.704000 audit[3931]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffec41b01d0 a2=98 a3=0 items=0 ppid=3499 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.704000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 04:18:38.704000 audit: BPF prog-id=21 op=UNLOAD Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { perfmon } for pid=3931 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { perfmon } for pid=3931 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { perfmon } for pid=3931 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { perfmon } for pid=3931 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { perfmon } for pid=3931 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit: BPF prog-id=22 op=LOAD Nov 1 04:18:38.705000 audit[3931]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffec41affe0 a2=94 a3=54428f items=0 ppid=3499 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 04:18:38.705000 audit: BPF prog-id=22 op=UNLOAD Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { perfmon } for pid=3931 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { perfmon } for pid=3931 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { perfmon } for pid=3931 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { perfmon } for pid=3931 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
04:18:38.705000 audit[3931]: AVC avc: denied { perfmon } for pid=3931 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit: BPF prog-id=23 op=LOAD Nov 1 04:18:38.705000 audit[3931]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffec41b0010 a2=94 a3=2 items=0 ppid=3499 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 04:18:38.705000 audit: BPF prog-id=23 op=UNLOAD Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit[3931]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffec41afee0 a2=28 a3=0 items=0 ppid=3499 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit[3931]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffec41aff10 a2=28 a3=0 items=0 ppid=3499 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit[3931]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffec41afe20 a2=28 a3=0 items=0 ppid=3499 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.705000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit[3931]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffec41aff30 a2=28 a3=0 items=0 ppid=3499 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit[3931]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffec41aff10 a2=28 a3=0 items=0 ppid=3499 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit[3931]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffec41aff00 a2=28 a3=0 items=0 ppid=3499 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit[3931]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffec41aff30 a2=28 a3=0 items=0 ppid=3499 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit[3931]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffec41aff10 a2=28 a3=0 items=0 ppid=3499 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit[3931]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffec41aff30 a2=28 a3=0 items=0 ppid=3499 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit[3931]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffec41aff00 a2=28 a3=0 items=0 ppid=3499 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit[3931]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffec41aff70 a2=28 a3=0 items=0 ppid=3499 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { perfmon } for 
pid=3931 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { perfmon } for pid=3931 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { perfmon } for pid=3931 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { perfmon } for pid=3931 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { perfmon } for pid=3931 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.705000 audit: BPF prog-id=24 op=LOAD Nov 1 04:18:38.705000 audit[3931]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffec41afde0 a2=94 a3=0 items=0 ppid=3499 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.705000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 04:18:38.705000 audit: BPF prog-id=24 op=UNLOAD Nov 1 04:18:38.706000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.706000 audit[3931]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffec41afdd0 a2=50 a3=2800 items=0 ppid=3499 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.706000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 04:18:38.706000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.706000 audit[3931]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffec41afdd0 a2=50 a3=2800 items=0 ppid=3499 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.706000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 04:18:38.706000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.706000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.706000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.706000 audit[3931]: AVC avc: denied { perfmon } for pid=3931 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.706000 audit[3931]: AVC avc: denied { perfmon } for pid=3931 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.706000 audit[3931]: AVC avc: denied { perfmon } for pid=3931 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.706000 audit[3931]: AVC avc: denied { perfmon } for pid=3931 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.706000 audit[3931]: AVC avc: denied { perfmon } for pid=3931 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.706000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.706000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.706000 audit: BPF prog-id=25 op=LOAD Nov 1 04:18:38.706000 audit[3931]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffec41af5f0 a2=94 a3=2 items=0 ppid=3499 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.706000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 04:18:38.707000 audit: BPF prog-id=25 op=UNLOAD Nov 1 04:18:38.707000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.707000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.707000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.707000 audit[3931]: AVC avc: denied { perfmon } for pid=3931 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.707000 audit[3931]: AVC avc: denied { perfmon } for pid=3931 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.707000 audit[3931]: AVC avc: denied { perfmon } for pid=3931 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.707000 audit[3931]: AVC avc: denied { perfmon } for pid=3931 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.707000 audit[3931]: AVC avc: denied { perfmon } for pid=3931 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.707000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.707000 audit[3931]: AVC avc: denied { bpf } for pid=3931 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.707000 audit: BPF prog-id=26 op=LOAD Nov 1 04:18:38.707000 audit[3931]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffec41af6f0 a2=94 a3=30 items=0 ppid=3499 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.707000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Nov 1 04:18:38.714000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.714000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.714000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.714000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.714000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.714000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 
04:18:38.714000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.714000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.714000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.714000 audit: BPF prog-id=27 op=LOAD Nov 1 04:18:38.714000 audit[3934]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe6cb5c190 a2=98 a3=0 items=0 ppid=3499 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.714000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 04:18:38.715000 audit: BPF prog-id=27 op=UNLOAD Nov 1 04:18:38.715000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.715000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.715000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.715000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.715000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.715000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.715000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.715000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.715000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.715000 audit: BPF prog-id=28 op=LOAD Nov 1 04:18:38.715000 audit[3934]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe6cb5bf80 a2=94 a3=54428f items=0 ppid=3499 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.715000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 04:18:38.716000 audit: BPF prog-id=28 op=UNLOAD Nov 1 04:18:38.716000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.716000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.716000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.716000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.716000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.716000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.716000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.716000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.716000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.716000 audit: BPF prog-id=29 op=LOAD Nov 1 04:18:38.716000 audit[3934]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe6cb5bfb0 a2=94 a3=2 items=0 ppid=3499 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.716000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 04:18:38.718000 audit: BPF prog-id=29 op=UNLOAD Nov 1 04:18:38.743271 env[1306]: time="2025-11-01T04:18:38.726535479Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 04:18:38.743271 env[1306]: time="2025-11-01T04:18:38.728301880Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 04:18:38.743271 env[1306]: time="2025-11-01T04:18:38.732055278Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 04:18:38.743451 kubelet[2178]: E1101 04:18:38.728604 2178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 04:18:38.743451 kubelet[2178]: E1101 04:18:38.728675 2178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 04:18:38.743451 kubelet[2178]: E1101 04:18:38.729074 2178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fwflm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4ccfj_calico-system(f7f98be6-9d36-4c44-bedf-cd179c76bbfe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 04:18:38.850470 env[1306]: time="2025-11-01T04:18:38.850427530Z" level=info msg="StopPodSandbox for \"c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a\"" Nov 1 04:18:38.928163 systemd-networkd[1086]: cali86caf317204: Link UP Nov 1 04:18:38.945404 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali86caf317204: link becomes ready Nov 1 04:18:38.945679 systemd-networkd[1086]: cali86caf317204: Gained carrier Nov 1 04:18:38.980000 
audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.980000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.980000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.980000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.980000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.980000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.980000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.980000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.980000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.980000 audit: BPF prog-id=30 op=LOAD Nov 1 04:18:38.980000 audit[3934]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe6cb5be70 a2=94 a3=1 items=0 ppid=3499 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.980000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 04:18:38.980000 audit: BPF prog-id=30 op=UNLOAD Nov 1 04:18:38.980000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:38.980000 audit[3934]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffe6cb5bf40 a2=50 a3=7ffe6cb5c020 items=0 ppid=3499 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:38.980000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 04:18:38.992954 env[1306]: 2025-11-01 04:18:38.538 [INFO][3850] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cvw4c-eth0 coredns-668d6bf9bc- kube-system f6255d76-4d91-405c-b114-1f4e921c4b8b 915 0 2025-11-01 04:17:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-i9e8z.gb1.brightbox.com coredns-668d6bf9bc-cvw4c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali86caf317204 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fa181867f33a76460842bd82d39107ad76642a377fba38cfa58509911860cf60" Namespace="kube-system" Pod="coredns-668d6bf9bc-cvw4c" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cvw4c-" Nov 1 04:18:38.992954 env[1306]: 2025-11-01 04:18:38.538 [INFO][3850] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fa181867f33a76460842bd82d39107ad76642a377fba38cfa58509911860cf60" Namespace="kube-system" Pod="coredns-668d6bf9bc-cvw4c" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cvw4c-eth0" Nov 1 04:18:38.992954 env[1306]: 2025-11-01 04:18:38.767 [INFO][3907] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fa181867f33a76460842bd82d39107ad76642a377fba38cfa58509911860cf60" HandleID="k8s-pod-network.fa181867f33a76460842bd82d39107ad76642a377fba38cfa58509911860cf60" Workload="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cvw4c-eth0" Nov 1 04:18:38.992954 env[1306]: 2025-11-01 04:18:38.769 [INFO][3907] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fa181867f33a76460842bd82d39107ad76642a377fba38cfa58509911860cf60" HandleID="k8s-pod-network.fa181867f33a76460842bd82d39107ad76642a377fba38cfa58509911860cf60" Workload="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cvw4c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000393250), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-i9e8z.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-cvw4c", "timestamp":"2025-11-01 04:18:38.767990257 +0000 UTC"}, Hostname:"srv-i9e8z.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 04:18:38.992954 env[1306]: 2025-11-01 04:18:38.770 [INFO][3907] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 04:18:38.992954 env[1306]: 2025-11-01 04:18:38.770 [INFO][3907] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
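In the audit records above and below, capability=39 is CAP_BPF and capability=38 is CAP_PERFMON; both are reported under tclass=capability2 because SELinux keeps capabilities numbered 32 and higher in a second capability class. The long proctitle= values are the audited command lines, hex-encoded by auditd because the argv entries are separated by NUL bytes. A minimal decoding sketch follows (decode_proctitle is an illustrative helper, not part of auditd or Calico; the hex value is copied verbatim from the audit[3931] records above):

    def decode_proctitle(hex_value: str) -> str:
        # auditd hex-encodes PROCTITLE when it contains non-printable bytes;
        # the raw value is the process argv joined with NUL separators.
        argv = bytes.fromhex(hex_value).split(b"\x00")
        return " ".join(arg.decode("utf-8", errors="replace") for arg in argv)

    # PROCTITLE value from the audit[3931] SYSCALL/PROCTITLE records above.
    PROCTITLE = (
        "627066746F6F6C0070726F67006C6F616400"
        "2F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F00"
        "2F7379732F66732F6270662F63616C69636F2F7864702F"
        "70726566696C7465725F76315F63616C69636F5F746D705F4100"
        "7479706500786470"
    )

    print(decode_proctitle(PROCTITLE))
    # bpftool prog load /usr/lib/calico/bpf/filter.o /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A type xdp

Decoded the same way, the later audit[3934] records carry "bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A", i.e. Calico first loads its XDP prefilter object via bpftool and then inspects the pinned program.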
Nov 1 04:18:38.992954 env[1306]: 2025-11-01 04:18:38.770 [INFO][3907] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-i9e8z.gb1.brightbox.com' Nov 1 04:18:38.992954 env[1306]: 2025-11-01 04:18:38.837 [INFO][3907] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fa181867f33a76460842bd82d39107ad76642a377fba38cfa58509911860cf60" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:38.992954 env[1306]: 2025-11-01 04:18:38.846 [INFO][3907] ipam/ipam.go 394: Looking up existing affinities for host host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:38.992954 env[1306]: 2025-11-01 04:18:38.852 [INFO][3907] ipam/ipam.go 511: Trying affinity for 192.168.47.64/26 host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:38.992954 env[1306]: 2025-11-01 04:18:38.856 [INFO][3907] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.64/26 host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:38.992954 env[1306]: 2025-11-01 04:18:38.861 [INFO][3907] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.64/26 host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:38.992954 env[1306]: 2025-11-01 04:18:38.861 [INFO][3907] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.47.64/26 handle="k8s-pod-network.fa181867f33a76460842bd82d39107ad76642a377fba38cfa58509911860cf60" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:38.992954 env[1306]: 2025-11-01 04:18:38.864 [INFO][3907] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fa181867f33a76460842bd82d39107ad76642a377fba38cfa58509911860cf60 Nov 1 04:18:38.992954 env[1306]: 2025-11-01 04:18:38.872 [INFO][3907] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.47.64/26 handle="k8s-pod-network.fa181867f33a76460842bd82d39107ad76642a377fba38cfa58509911860cf60" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:38.992954 env[1306]: 2025-11-01 04:18:38.883 [INFO][3907] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.47.68/26] block=192.168.47.64/26 handle="k8s-pod-network.fa181867f33a76460842bd82d39107ad76642a377fba38cfa58509911860cf60" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:38.992954 env[1306]: 2025-11-01 04:18:38.884 [INFO][3907] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.68/26] handle="k8s-pod-network.fa181867f33a76460842bd82d39107ad76642a377fba38cfa58509911860cf60" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:38.992954 env[1306]: 2025-11-01 04:18:38.884 [INFO][3907] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
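The IPAM entries just above show the host confirming its affinity for block 192.168.47.64/26 and then claiming 192.168.47.68 for coredns-668d6bf9bc-cvw4c. A quick sanity check of that arithmetic with Python's ipaddress module (illustrative only, not Calico code):

    import ipaddress

    block = ipaddress.ip_network("192.168.47.64/26")
    claimed = ipaddress.ip_address("192.168.47.68")

    assert claimed in block        # the claimed address falls inside the affine block
    print(block.num_addresses)     # 64 addresses per /26 block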
Nov 1 04:18:38.992954 env[1306]: 2025-11-01 04:18:38.884 [INFO][3907] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.47.68/26] IPv6=[] ContainerID="fa181867f33a76460842bd82d39107ad76642a377fba38cfa58509911860cf60" HandleID="k8s-pod-network.fa181867f33a76460842bd82d39107ad76642a377fba38cfa58509911860cf60" Workload="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cvw4c-eth0" Nov 1 04:18:38.993851 env[1306]: 2025-11-01 04:18:38.911 [INFO][3850] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fa181867f33a76460842bd82d39107ad76642a377fba38cfa58509911860cf60" Namespace="kube-system" Pod="coredns-668d6bf9bc-cvw4c" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cvw4c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cvw4c-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f6255d76-4d91-405c-b114-1f4e921c4b8b", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 4, 17, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i9e8z.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-cvw4c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86caf317204", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 04:18:38.993851 env[1306]: 2025-11-01 04:18:38.911 [INFO][3850] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.68/32] ContainerID="fa181867f33a76460842bd82d39107ad76642a377fba38cfa58509911860cf60" Namespace="kube-system" Pod="coredns-668d6bf9bc-cvw4c" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cvw4c-eth0" Nov 1 04:18:38.993851 env[1306]: 2025-11-01 04:18:38.911 [INFO][3850] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali86caf317204 ContainerID="fa181867f33a76460842bd82d39107ad76642a377fba38cfa58509911860cf60" Namespace="kube-system" Pod="coredns-668d6bf9bc-cvw4c" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cvw4c-eth0" Nov 1 04:18:38.993851 env[1306]: 2025-11-01 04:18:38.960 [INFO][3850] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fa181867f33a76460842bd82d39107ad76642a377fba38cfa58509911860cf60" Namespace="kube-system" Pod="coredns-668d6bf9bc-cvw4c" 
WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cvw4c-eth0" Nov 1 04:18:38.993851 env[1306]: 2025-11-01 04:18:38.961 [INFO][3850] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fa181867f33a76460842bd82d39107ad76642a377fba38cfa58509911860cf60" Namespace="kube-system" Pod="coredns-668d6bf9bc-cvw4c" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cvw4c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cvw4c-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f6255d76-4d91-405c-b114-1f4e921c4b8b", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 4, 17, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i9e8z.gb1.brightbox.com", ContainerID:"fa181867f33a76460842bd82d39107ad76642a377fba38cfa58509911860cf60", Pod:"coredns-668d6bf9bc-cvw4c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86caf317204", MAC:"06:e4:a5:dd:b3:3f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 04:18:38.993851 env[1306]: 2025-11-01 04:18:38.972 [INFO][3850] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fa181867f33a76460842bd82d39107ad76642a377fba38cfa58509911860cf60" Namespace="kube-system" Pod="coredns-668d6bf9bc-cvw4c" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cvw4c-eth0" Nov 1 04:18:39.004000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.004000 audit[3934]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe6cb5be80 a2=28 a3=0 items=0 ppid=3499 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:39.004000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 04:18:39.004000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.004000 audit[3934]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe6cb5beb0 a2=28 a3=0 items=0 ppid=3499 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:39.004000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 04:18:39.004000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.004000 audit[3934]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe6cb5bdc0 a2=28 a3=0 items=0 ppid=3499 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:39.004000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 04:18:39.004000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.004000 audit[3934]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe6cb5bed0 a2=28 a3=0 items=0 ppid=3499 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:39.004000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 04:18:39.004000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.004000 audit[3934]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe6cb5beb0 a2=28 a3=0 items=0 ppid=3499 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:39.004000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 04:18:39.004000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.004000 audit[3934]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe6cb5bea0 a2=28 a3=0 items=0 ppid=3499 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:39.004000 
audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 04:18:39.004000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.004000 audit[3934]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe6cb5bed0 a2=28 a3=0 items=0 ppid=3499 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:39.004000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 04:18:39.004000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.004000 audit[3934]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe6cb5beb0 a2=28 a3=0 items=0 ppid=3499 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:39.004000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 04:18:39.004000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.004000 audit[3934]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe6cb5bed0 a2=28 a3=0 items=0 ppid=3499 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:39.004000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 04:18:39.004000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.004000 audit[3934]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe6cb5bea0 a2=28 a3=0 items=0 ppid=3499 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:39.004000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 04:18:39.004000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.004000 audit[3934]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=4 a0=12 a1=7ffe6cb5bf10 a2=28 a3=0 items=0 ppid=3499 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:39.004000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 04:18:39.004000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.004000 audit[3934]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe6cb5bcc0 a2=50 a3=1 items=0 ppid=3499 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:39.004000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 04:18:39.004000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.004000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.004000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.004000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.004000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.004000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.004000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.004000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.004000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.004000 audit: BPF prog-id=31 op=LOAD Nov 1 04:18:39.004000 audit[3934]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe6cb5bcc0 a2=94 a3=5 items=0 ppid=3499 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:39.004000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 04:18:39.004000 audit: BPF prog-id=31 op=UNLOAD Nov 1 04:18:39.004000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.004000 audit[3934]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe6cb5bd70 a2=50 a3=1 items=0 ppid=3499 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:39.004000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 04:18:39.005000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.005000 audit[3934]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffe6cb5be90 a2=4 a3=38 items=0 ppid=3499 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:39.005000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 04:18:39.005000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.005000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.005000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.005000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.005000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.005000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.005000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.005000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Nov 1 04:18:39.005000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.005000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.005000 audit[3934]: AVC avc: denied { confidentiality } for pid=3934 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 04:18:39.005000 audit[3934]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe6cb5bee0 a2=94 a3=6 items=0 ppid=3499 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:39.005000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 04:18:39.006000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.006000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.006000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.006000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.006000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.006000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.006000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.006000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.006000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.006000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.006000 audit[3934]: AVC avc: denied { confidentiality } for pid=3934 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 04:18:39.006000 audit[3934]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe6cb5b690 a2=94 a3=88 items=0 ppid=3499 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:39.006000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 04:18:39.006000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.006000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.006000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.006000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.006000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.006000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.006000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.006000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.006000 audit[3934]: AVC avc: denied { perfmon } for pid=3934 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.006000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.006000 audit[3934]: AVC avc: denied { confidentiality } for pid=3934 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Nov 1 04:18:39.006000 audit[3934]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe6cb5b690 a2=94 a3=88 items=0 ppid=3499 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:39.006000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 04:18:39.006000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.006000 audit[3934]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe6cb5d0c0 a2=10 a3=f8f00800 items=0 ppid=3499 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:39.006000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 04:18:39.007000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.007000 audit[3934]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe6cb5cf60 a2=10 a3=3 items=0 ppid=3499 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:39.007000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 04:18:39.007000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.007000 audit[3934]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe6cb5cf00 a2=10 a3=3 items=0 ppid=3499 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:39.007000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 04:18:39.007000 audit[3934]: AVC avc: denied { bpf } for pid=3934 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Nov 1 04:18:39.007000 audit[3934]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe6cb5cf00 a2=10 a3=7 items=0 ppid=3499 pid=3934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:39.007000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Nov 1 04:18:39.036646 systemd-networkd[1086]: cali7c5fb42d33a: Link UP Nov 1 04:18:39.038916 systemd-networkd[1086]: cali7c5fb42d33a: Gained carrier Nov 1 04:18:39.039338 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali7c5fb42d33a: link becomes ready Nov 1 04:18:39.053695 env[1306]: 
time="2025-11-01T04:18:39.047235504Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 04:18:39.053695 env[1306]: time="2025-11-01T04:18:39.047905852Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 04:18:39.054076 kubelet[2178]: E1101 04:18:39.048164 2178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 04:18:39.054076 kubelet[2178]: E1101 04:18:39.048235 2178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 04:18:39.054076 kubelet[2178]: E1101 04:18:39.048414 2178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fwflm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4ccfj_calico-system(f7f98be6-9d36-4c44-bedf-cd179c76bbfe): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 04:18:39.054076 kubelet[2178]: E1101 04:18:39.049773 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4ccfj" podUID="f7f98be6-9d36-4c44-bedf-cd179c76bbfe" Nov 1 04:18:39.054927 env[1306]: time="2025-11-01T04:18:39.054845145Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 04:18:39.054927 env[1306]: time="2025-11-01T04:18:39.054899482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 04:18:39.054927 env[1306]: time="2025-11-01T04:18:39.054912080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 04:18:39.055762 env[1306]: time="2025-11-01T04:18:39.055712637Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa181867f33a76460842bd82d39107ad76642a377fba38cfa58509911860cf60 pid=3989 runtime=io.containerd.runc.v2 Nov 1 04:18:39.070000 audit: BPF prog-id=26 op=UNLOAD Nov 1 04:18:39.077700 env[1306]: 2025-11-01 04:18:39.003 [INFO][3961] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" Nov 1 04:18:39.077700 env[1306]: 2025-11-01 04:18:39.003 [INFO][3961] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" iface="eth0" netns="/var/run/netns/cni-4709ac7c-4dea-8f8d-7952-e7f9ae729ec6" Nov 1 04:18:39.077700 env[1306]: 2025-11-01 04:18:39.003 [INFO][3961] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" iface="eth0" netns="/var/run/netns/cni-4709ac7c-4dea-8f8d-7952-e7f9ae729ec6" Nov 1 04:18:39.077700 env[1306]: 2025-11-01 04:18:39.004 [INFO][3961] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" iface="eth0" netns="/var/run/netns/cni-4709ac7c-4dea-8f8d-7952-e7f9ae729ec6" Nov 1 04:18:39.077700 env[1306]: 2025-11-01 04:18:39.004 [INFO][3961] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" Nov 1 04:18:39.077700 env[1306]: 2025-11-01 04:18:39.004 [INFO][3961] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" Nov 1 04:18:39.077700 env[1306]: 2025-11-01 04:18:39.061 [INFO][3975] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" HandleID="k8s-pod-network.c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--64jld-eth0" Nov 1 04:18:39.077700 env[1306]: 2025-11-01 04:18:39.062 [INFO][3975] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 04:18:39.077700 env[1306]: 2025-11-01 04:18:39.062 [INFO][3975] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 04:18:39.077700 env[1306]: 2025-11-01 04:18:39.070 [WARNING][3975] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" HandleID="k8s-pod-network.c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--64jld-eth0" Nov 1 04:18:39.077700 env[1306]: 2025-11-01 04:18:39.070 [INFO][3975] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" HandleID="k8s-pod-network.c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--64jld-eth0" Nov 1 04:18:39.077700 env[1306]: 2025-11-01 04:18:39.071 [INFO][3975] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 04:18:39.077700 env[1306]: 2025-11-01 04:18:39.073 [INFO][3961] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" Nov 1 04:18:39.077700 env[1306]: time="2025-11-01T04:18:39.077698956Z" level=info msg="TearDown network for sandbox \"c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a\" successfully" Nov 1 04:18:39.078503 env[1306]: time="2025-11-01T04:18:39.077728074Z" level=info msg="StopPodSandbox for \"c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a\" returns successfully" Nov 1 04:18:39.080100 env[1306]: time="2025-11-01T04:18:39.078646209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dbfc7dd7f-64jld,Uid:cae162c8-7a97-49e7-94b4-82cbdda1df19,Namespace:calico-apiserver,Attempt:1,}" Nov 1 04:18:39.081703 env[1306]: 2025-11-01 04:18:38.610 [INFO][3870] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--i9e8z.gb1.brightbox.com-k8s-goldmane--666569f655--7nddq-eth0 goldmane-666569f655- calico-system 09a6446d-f1c6-40ae-8ffc-711e84b66ed9 917 0 2025-11-01 04:18:10 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-i9e8z.gb1.brightbox.com goldmane-666569f655-7nddq eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7c5fb42d33a [] [] }} ContainerID="88c5145414aedcfce7976a08c6d5299bb066d4c5c16b1bd92d50906d585a40ed" Namespace="calico-system" Pod="goldmane-666569f655-7nddq" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-goldmane--666569f655--7nddq-" Nov 1 04:18:39.081703 env[1306]: 2025-11-01 04:18:38.611 [INFO][3870] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="88c5145414aedcfce7976a08c6d5299bb066d4c5c16b1bd92d50906d585a40ed" Namespace="calico-system" Pod="goldmane-666569f655-7nddq" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-goldmane--666569f655--7nddq-eth0" Nov 1 04:18:39.081703 env[1306]: 2025-11-01 04:18:38.830 [INFO][3917] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="88c5145414aedcfce7976a08c6d5299bb066d4c5c16b1bd92d50906d585a40ed" HandleID="k8s-pod-network.88c5145414aedcfce7976a08c6d5299bb066d4c5c16b1bd92d50906d585a40ed" Workload="srv--i9e8z.gb1.brightbox.com-k8s-goldmane--666569f655--7nddq-eth0" Nov 1 04:18:39.081703 env[1306]: 2025-11-01 04:18:38.830 [INFO][3917] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="88c5145414aedcfce7976a08c6d5299bb066d4c5c16b1bd92d50906d585a40ed" HandleID="k8s-pod-network.88c5145414aedcfce7976a08c6d5299bb066d4c5c16b1bd92d50906d585a40ed" Workload="srv--i9e8z.gb1.brightbox.com-k8s-goldmane--666569f655--7nddq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fd00), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-i9e8z.gb1.brightbox.com", "pod":"goldmane-666569f655-7nddq", "timestamp":"2025-11-01 04:18:38.83059698 +0000 UTC"}, Hostname:"srv-i9e8z.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 04:18:39.081703 env[1306]: 2025-11-01 04:18:38.830 [INFO][3917] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 04:18:39.081703 env[1306]: 2025-11-01 04:18:38.884 [INFO][3917] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 04:18:39.081703 env[1306]: 2025-11-01 04:18:38.884 [INFO][3917] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-i9e8z.gb1.brightbox.com' Nov 1 04:18:39.081703 env[1306]: 2025-11-01 04:18:38.914 [INFO][3917] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.88c5145414aedcfce7976a08c6d5299bb066d4c5c16b1bd92d50906d585a40ed" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:39.081703 env[1306]: 2025-11-01 04:18:38.961 [INFO][3917] ipam/ipam.go 394: Looking up existing affinities for host host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:39.081703 env[1306]: 2025-11-01 04:18:38.980 [INFO][3917] ipam/ipam.go 511: Trying affinity for 192.168.47.64/26 host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:39.081703 env[1306]: 2025-11-01 04:18:38.983 [INFO][3917] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.64/26 host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:39.081703 env[1306]: 2025-11-01 04:18:38.990 [INFO][3917] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.64/26 host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:39.081703 env[1306]: 2025-11-01 04:18:38.990 [INFO][3917] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.47.64/26 handle="k8s-pod-network.88c5145414aedcfce7976a08c6d5299bb066d4c5c16b1bd92d50906d585a40ed" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:39.081703 env[1306]: 2025-11-01 04:18:39.000 [INFO][3917] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.88c5145414aedcfce7976a08c6d5299bb066d4c5c16b1bd92d50906d585a40ed Nov 1 04:18:39.081703 env[1306]: 2025-11-01 04:18:39.016 [INFO][3917] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.47.64/26 handle="k8s-pod-network.88c5145414aedcfce7976a08c6d5299bb066d4c5c16b1bd92d50906d585a40ed" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:39.081703 env[1306]: 2025-11-01 04:18:39.027 [INFO][3917] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.47.69/26] block=192.168.47.64/26 handle="k8s-pod-network.88c5145414aedcfce7976a08c6d5299bb066d4c5c16b1bd92d50906d585a40ed" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:39.081703 env[1306]: 2025-11-01 04:18:39.027 [INFO][3917] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.69/26] handle="k8s-pod-network.88c5145414aedcfce7976a08c6d5299bb066d4c5c16b1bd92d50906d585a40ed" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:39.081703 env[1306]: 2025-11-01 04:18:39.027 [INFO][3917] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 04:18:39.081703 env[1306]: 2025-11-01 04:18:39.027 [INFO][3917] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.47.69/26] IPv6=[] ContainerID="88c5145414aedcfce7976a08c6d5299bb066d4c5c16b1bd92d50906d585a40ed" HandleID="k8s-pod-network.88c5145414aedcfce7976a08c6d5299bb066d4c5c16b1bd92d50906d585a40ed" Workload="srv--i9e8z.gb1.brightbox.com-k8s-goldmane--666569f655--7nddq-eth0" Nov 1 04:18:39.091876 env[1306]: 2025-11-01 04:18:39.029 [INFO][3870] cni-plugin/k8s.go 418: Populated endpoint ContainerID="88c5145414aedcfce7976a08c6d5299bb066d4c5c16b1bd92d50906d585a40ed" Namespace="calico-system" Pod="goldmane-666569f655-7nddq" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-goldmane--666569f655--7nddq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i9e8z.gb1.brightbox.com-k8s-goldmane--666569f655--7nddq-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"09a6446d-f1c6-40ae-8ffc-711e84b66ed9", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 4, 18, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i9e8z.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-666569f655-7nddq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.47.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7c5fb42d33a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 04:18:39.091876 env[1306]: 2025-11-01 04:18:39.030 [INFO][3870] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.69/32] ContainerID="88c5145414aedcfce7976a08c6d5299bb066d4c5c16b1bd92d50906d585a40ed" Namespace="calico-system" Pod="goldmane-666569f655-7nddq" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-goldmane--666569f655--7nddq-eth0" Nov 1 04:18:39.091876 env[1306]: 2025-11-01 04:18:39.030 [INFO][3870] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c5fb42d33a ContainerID="88c5145414aedcfce7976a08c6d5299bb066d4c5c16b1bd92d50906d585a40ed" Namespace="calico-system" Pod="goldmane-666569f655-7nddq" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-goldmane--666569f655--7nddq-eth0" Nov 1 04:18:39.091876 env[1306]: 2025-11-01 04:18:39.039 [INFO][3870] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="88c5145414aedcfce7976a08c6d5299bb066d4c5c16b1bd92d50906d585a40ed" Namespace="calico-system" Pod="goldmane-666569f655-7nddq" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-goldmane--666569f655--7nddq-eth0" Nov 1 04:18:39.091876 env[1306]: 2025-11-01 04:18:39.044 [INFO][3870] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="88c5145414aedcfce7976a08c6d5299bb066d4c5c16b1bd92d50906d585a40ed" Namespace="calico-system" Pod="goldmane-666569f655-7nddq" 
WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-goldmane--666569f655--7nddq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i9e8z.gb1.brightbox.com-k8s-goldmane--666569f655--7nddq-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"09a6446d-f1c6-40ae-8ffc-711e84b66ed9", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 4, 18, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i9e8z.gb1.brightbox.com", ContainerID:"88c5145414aedcfce7976a08c6d5299bb066d4c5c16b1bd92d50906d585a40ed", Pod:"goldmane-666569f655-7nddq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.47.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7c5fb42d33a", MAC:"0a:25:31:f5:e9:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 04:18:39.091876 env[1306]: 2025-11-01 04:18:39.077 [INFO][3870] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="88c5145414aedcfce7976a08c6d5299bb066d4c5c16b1bd92d50906d585a40ed" Namespace="calico-system" Pod="goldmane-666569f655-7nddq" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-goldmane--666569f655--7nddq-eth0" Nov 1 04:18:39.112202 systemd[1]: run-netns-cni\x2d1c2b9bfe\x2d86e6\x2def4b\x2d701b\x2d77adbfb2431d.mount: Deactivated successfully. Nov 1 04:18:39.112359 systemd[1]: run-netns-cni\x2d4709ac7c\x2d4dea\x2d8f8d\x2d7952\x2de7f9ae729ec6.mount: Deactivated successfully. Nov 1 04:18:39.112450 systemd[1]: run-netns-cni\x2d4eb70a5d\x2d9464\x2da957\x2d3fec\x2db92871a3c7d4.mount: Deactivated successfully. Nov 1 04:18:39.135711 systemd[1]: run-containerd-runc-k8s.io-fa181867f33a76460842bd82d39107ad76642a377fba38cfa58509911860cf60-runc.A9LmHw.mount: Deactivated successfully. Nov 1 04:18:39.148173 env[1306]: time="2025-11-01T04:18:39.148102717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 04:18:39.148341 env[1306]: time="2025-11-01T04:18:39.148181273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 04:18:39.148341 env[1306]: time="2025-11-01T04:18:39.148205192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 04:18:39.148413 env[1306]: time="2025-11-01T04:18:39.148374752Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/88c5145414aedcfce7976a08c6d5299bb066d4c5c16b1bd92d50906d585a40ed pid=4040 runtime=io.containerd.runc.v2 Nov 1 04:18:39.221346 kubelet[2178]: E1101 04:18:39.215545 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-585f46d5f6-8lv48" podUID="10fb6fc1-31fc-4d32-a2e8-032e174f09df" Nov 1 04:18:39.221346 kubelet[2178]: E1101 04:18:39.216614 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4ccfj" podUID="f7f98be6-9d36-4c44-bedf-cd179c76bbfe" Nov 1 04:18:39.221656 kubelet[2178]: I1101 04:18:39.221383 2178 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jpfzq" podStartSLOduration=43.221359949000004 podStartE2EDuration="43.221359949s" podCreationTimestamp="2025-11-01 04:17:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 04:18:39.220746276 +0000 UTC m=+46.681648977" watchObservedRunningTime="2025-11-01 04:18:39.221359949 +0000 UTC m=+46.682262667" Nov 1 04:18:39.315000 audit[4087]: NETFILTER_CFG table=filter:103 family=2 entries=17 op=nft_register_rule pid=4087 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:18:39.315000 audit[4087]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc8e668520 a2=0 a3=7ffc8e66850c items=0 ppid=2289 pid=4087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Nov 1 04:18:39.315000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:18:39.318535 systemd-networkd[1086]: calia6a311739f5: Gained IPv6LL Nov 1 04:18:39.323000 audit[4087]: NETFILTER_CFG table=nat:104 family=2 entries=35 op=nft_register_chain pid=4087 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:18:39.323000 audit[4087]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc8e668520 a2=0 a3=7ffc8e66850c items=0 ppid=2289 pid=4087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:39.323000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:18:39.353714 env[1306]: time="2025-11-01T04:18:39.345292417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cvw4c,Uid:f6255d76-4d91-405c-b114-1f4e921c4b8b,Namespace:kube-system,Attempt:1,} returns sandbox id \"fa181867f33a76460842bd82d39107ad76642a377fba38cfa58509911860cf60\"" Nov 1 04:18:39.353714 env[1306]: time="2025-11-01T04:18:39.350365743Z" level=info msg="CreateContainer within sandbox \"fa181867f33a76460842bd82d39107ad76642a377fba38cfa58509911860cf60\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 04:18:39.362614 env[1306]: time="2025-11-01T04:18:39.362124883Z" level=info msg="CreateContainer within sandbox \"fa181867f33a76460842bd82d39107ad76642a377fba38cfa58509911860cf60\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f4240a15a1db8cca551310996144147f3d44fe8837b903effeef15b7f1fad28d\"" Nov 1 04:18:39.363488 env[1306]: time="2025-11-01T04:18:39.363446007Z" level=info msg="StartContainer for \"f4240a15a1db8cca551310996144147f3d44fe8837b903effeef15b7f1fad28d\"" Nov 1 04:18:39.382000 audit[4098]: NETFILTER_CFG table=filter:105 family=2 entries=14 op=nft_register_rule pid=4098 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:18:39.382000 audit[4098]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcf1b997a0 a2=0 a3=7ffcf1b9978c items=0 ppid=2289 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:39.382000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:18:39.387000 audit[4098]: NETFILTER_CFG table=nat:106 family=2 entries=20 op=nft_register_rule pid=4098 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:18:39.387000 audit[4098]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffcf1b997a0 a2=0 a3=7ffcf1b9978c items=0 ppid=2289 pid=4098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:39.387000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:18:39.447517 systemd-networkd[1086]: calibf83a63cd89: Gained IPv6LL Nov 1 04:18:39.483384 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid07a9cbe3af: link becomes ready Nov 1 
04:18:39.480742 systemd-networkd[1086]: calid07a9cbe3af: Link UP Nov 1 04:18:39.480930 systemd-networkd[1086]: calid07a9cbe3af: Gained carrier Nov 1 04:18:39.507085 env[1306]: 2025-11-01 04:18:39.283 [INFO][4012] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--64jld-eth0 calico-apiserver-6dbfc7dd7f- calico-apiserver cae162c8-7a97-49e7-94b4-82cbdda1df19 931 0 2025-11-01 04:18:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6dbfc7dd7f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-i9e8z.gb1.brightbox.com calico-apiserver-6dbfc7dd7f-64jld eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid07a9cbe3af [] [] }} ContainerID="fbe45d95506d309213883d00bec176398d7518228a656f9ed57ed693e6ebe705" Namespace="calico-apiserver" Pod="calico-apiserver-6dbfc7dd7f-64jld" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--64jld-" Nov 1 04:18:39.507085 env[1306]: 2025-11-01 04:18:39.283 [INFO][4012] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fbe45d95506d309213883d00bec176398d7518228a656f9ed57ed693e6ebe705" Namespace="calico-apiserver" Pod="calico-apiserver-6dbfc7dd7f-64jld" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--64jld-eth0" Nov 1 04:18:39.507085 env[1306]: 2025-11-01 04:18:39.379 [INFO][4083] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fbe45d95506d309213883d00bec176398d7518228a656f9ed57ed693e6ebe705" HandleID="k8s-pod-network.fbe45d95506d309213883d00bec176398d7518228a656f9ed57ed693e6ebe705" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--64jld-eth0" Nov 1 04:18:39.507085 env[1306]: 2025-11-01 04:18:39.380 [INFO][4083] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fbe45d95506d309213883d00bec176398d7518228a656f9ed57ed693e6ebe705" HandleID="k8s-pod-network.fbe45d95506d309213883d00bec176398d7518228a656f9ed57ed693e6ebe705" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--64jld-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ccfe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-i9e8z.gb1.brightbox.com", "pod":"calico-apiserver-6dbfc7dd7f-64jld", "timestamp":"2025-11-01 04:18:39.379969441 +0000 UTC"}, Hostname:"srv-i9e8z.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 04:18:39.507085 env[1306]: 2025-11-01 04:18:39.380 [INFO][4083] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 04:18:39.507085 env[1306]: 2025-11-01 04:18:39.380 [INFO][4083] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 04:18:39.507085 env[1306]: 2025-11-01 04:18:39.380 [INFO][4083] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-i9e8z.gb1.brightbox.com' Nov 1 04:18:39.507085 env[1306]: 2025-11-01 04:18:39.393 [INFO][4083] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fbe45d95506d309213883d00bec176398d7518228a656f9ed57ed693e6ebe705" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:39.507085 env[1306]: 2025-11-01 04:18:39.407 [INFO][4083] ipam/ipam.go 394: Looking up existing affinities for host host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:39.507085 env[1306]: 2025-11-01 04:18:39.426 [INFO][4083] ipam/ipam.go 511: Trying affinity for 192.168.47.64/26 host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:39.507085 env[1306]: 2025-11-01 04:18:39.428 [INFO][4083] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.64/26 host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:39.507085 env[1306]: 2025-11-01 04:18:39.431 [INFO][4083] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.64/26 host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:39.507085 env[1306]: 2025-11-01 04:18:39.431 [INFO][4083] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.47.64/26 handle="k8s-pod-network.fbe45d95506d309213883d00bec176398d7518228a656f9ed57ed693e6ebe705" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:39.507085 env[1306]: 2025-11-01 04:18:39.432 [INFO][4083] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fbe45d95506d309213883d00bec176398d7518228a656f9ed57ed693e6ebe705 Nov 1 04:18:39.507085 env[1306]: 2025-11-01 04:18:39.437 [INFO][4083] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.47.64/26 handle="k8s-pod-network.fbe45d95506d309213883d00bec176398d7518228a656f9ed57ed693e6ebe705" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:39.507085 env[1306]: 2025-11-01 04:18:39.453 [INFO][4083] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.47.70/26] block=192.168.47.64/26 handle="k8s-pod-network.fbe45d95506d309213883d00bec176398d7518228a656f9ed57ed693e6ebe705" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:39.507085 env[1306]: 2025-11-01 04:18:39.453 [INFO][4083] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.70/26] handle="k8s-pod-network.fbe45d95506d309213883d00bec176398d7518228a656f9ed57ed693e6ebe705" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:39.507085 env[1306]: 2025-11-01 04:18:39.454 [INFO][4083] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 04:18:39.507085 env[1306]: 2025-11-01 04:18:39.454 [INFO][4083] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.47.70/26] IPv6=[] ContainerID="fbe45d95506d309213883d00bec176398d7518228a656f9ed57ed693e6ebe705" HandleID="k8s-pod-network.fbe45d95506d309213883d00bec176398d7518228a656f9ed57ed693e6ebe705" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--64jld-eth0" Nov 1 04:18:39.507860 env[1306]: 2025-11-01 04:18:39.465 [INFO][4012] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fbe45d95506d309213883d00bec176398d7518228a656f9ed57ed693e6ebe705" Namespace="calico-apiserver" Pod="calico-apiserver-6dbfc7dd7f-64jld" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--64jld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--64jld-eth0", GenerateName:"calico-apiserver-6dbfc7dd7f-", Namespace:"calico-apiserver", SelfLink:"", UID:"cae162c8-7a97-49e7-94b4-82cbdda1df19", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 4, 18, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dbfc7dd7f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i9e8z.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-6dbfc7dd7f-64jld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid07a9cbe3af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 04:18:39.507860 env[1306]: 2025-11-01 04:18:39.465 [INFO][4012] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.70/32] ContainerID="fbe45d95506d309213883d00bec176398d7518228a656f9ed57ed693e6ebe705" Namespace="calico-apiserver" Pod="calico-apiserver-6dbfc7dd7f-64jld" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--64jld-eth0" Nov 1 04:18:39.507860 env[1306]: 2025-11-01 04:18:39.465 [INFO][4012] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid07a9cbe3af ContainerID="fbe45d95506d309213883d00bec176398d7518228a656f9ed57ed693e6ebe705" Namespace="calico-apiserver" Pod="calico-apiserver-6dbfc7dd7f-64jld" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--64jld-eth0" Nov 1 04:18:39.507860 env[1306]: 2025-11-01 04:18:39.472 [INFO][4012] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fbe45d95506d309213883d00bec176398d7518228a656f9ed57ed693e6ebe705" Namespace="calico-apiserver" Pod="calico-apiserver-6dbfc7dd7f-64jld" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--64jld-eth0" Nov 1 04:18:39.507860 env[1306]: 2025-11-01 04:18:39.494 [INFO][4012] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="fbe45d95506d309213883d00bec176398d7518228a656f9ed57ed693e6ebe705" Namespace="calico-apiserver" Pod="calico-apiserver-6dbfc7dd7f-64jld" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--64jld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--64jld-eth0", GenerateName:"calico-apiserver-6dbfc7dd7f-", Namespace:"calico-apiserver", SelfLink:"", UID:"cae162c8-7a97-49e7-94b4-82cbdda1df19", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 4, 18, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dbfc7dd7f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i9e8z.gb1.brightbox.com", ContainerID:"fbe45d95506d309213883d00bec176398d7518228a656f9ed57ed693e6ebe705", Pod:"calico-apiserver-6dbfc7dd7f-64jld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid07a9cbe3af", MAC:"76:17:f3:b5:06:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 04:18:39.507860 env[1306]: 2025-11-01 04:18:39.503 [INFO][4012] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fbe45d95506d309213883d00bec176398d7518228a656f9ed57ed693e6ebe705" Namespace="calico-apiserver" Pod="calico-apiserver-6dbfc7dd7f-64jld" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--64jld-eth0" Nov 1 04:18:39.544000 audit[4149]: NETFILTER_CFG table=mangle:107 family=2 entries=16 op=nft_register_chain pid=4149 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 04:18:39.544000 audit[4149]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffea1e98d50 a2=0 a3=7ffea1e98d3c items=0 ppid=3499 pid=4149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:39.544000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 04:18:39.557000 audit[4151]: NETFILTER_CFG table=nat:108 family=2 entries=15 op=nft_register_chain pid=4151 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 04:18:39.557000 audit[4151]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffc816b1570 a2=0 a3=7ffc816b155c items=0 ppid=3499 pid=4151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 
04:18:39.557000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 04:18:39.562000 audit[4148]: NETFILTER_CFG table=raw:109 family=2 entries=21 op=nft_register_chain pid=4148 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 04:18:39.562000 audit[4148]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd96bcfd20 a2=0 a3=7ffd96bcfd0c items=0 ppid=3499 pid=4148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:39.562000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 04:18:39.592084 env[1306]: time="2025-11-01T04:18:39.590179630Z" level=info msg="StartContainer for \"f4240a15a1db8cca551310996144147f3d44fe8837b903effeef15b7f1fad28d\" returns successfully" Nov 1 04:18:39.592386 env[1306]: time="2025-11-01T04:18:39.591615984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 04:18:39.592386 env[1306]: time="2025-11-01T04:18:39.591654364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 04:18:39.592386 env[1306]: time="2025-11-01T04:18:39.591666256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 04:18:39.592386 env[1306]: time="2025-11-01T04:18:39.591830571Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fbe45d95506d309213883d00bec176398d7518228a656f9ed57ed693e6ebe705 pid=4170 runtime=io.containerd.runc.v2 Nov 1 04:18:39.631929 env[1306]: time="2025-11-01T04:18:39.631868509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7nddq,Uid:09a6446d-f1c6-40ae-8ffc-711e84b66ed9,Namespace:calico-system,Attempt:1,} returns sandbox id \"88c5145414aedcfce7976a08c6d5299bb066d4c5c16b1bd92d50906d585a40ed\"" Nov 1 04:18:39.633719 env[1306]: time="2025-11-01T04:18:39.633693006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 04:18:39.625000 audit[4153]: NETFILTER_CFG table=filter:110 family=2 entries=156 op=nft_register_chain pid=4153 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 04:18:39.625000 audit[4153]: SYSCALL arch=c000003e syscall=46 success=yes exit=89444 a0=3 a1=7ffc09e02100 a2=0 a3=7ffc09e020ec items=0 ppid=3499 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:39.625000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 04:18:39.713000 audit[4218]: NETFILTER_CFG table=filter:111 family=2 entries=118 op=nft_register_chain pid=4218 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 04:18:39.713000 audit[4218]: SYSCALL arch=c000003e syscall=46 success=yes exit=67932 a0=3 a1=7ffc3f391660 a2=0 a3=7ffc3f39164c items=0 ppid=3499 pid=4218 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:39.713000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 04:18:39.725197 env[1306]: time="2025-11-01T04:18:39.725154262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dbfc7dd7f-64jld,Uid:cae162c8-7a97-49e7-94b4-82cbdda1df19,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"fbe45d95506d309213883d00bec176398d7518228a656f9ed57ed693e6ebe705\"" Nov 1 04:18:39.838083 env[1306]: time="2025-11-01T04:18:39.837998392Z" level=info msg="StopPodSandbox for \"acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a\"" Nov 1 04:18:39.938597 env[1306]: time="2025-11-01T04:18:39.938529798Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 04:18:39.939281 env[1306]: time="2025-11-01T04:18:39.939219347Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 04:18:39.942737 kubelet[2178]: E1101 04:18:39.939662 2178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 04:18:39.942737 kubelet[2178]: E1101 04:18:39.939763 2178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 04:18:39.942737 kubelet[2178]: E1101 04:18:39.940283 2178 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2hz7s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7nddq_calico-system(09a6446d-f1c6-40ae-8ffc-711e84b66ed9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 04:18:39.942737 kubelet[2178]: E1101 04:18:39.942601 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7nddq" podUID="09a6446d-f1c6-40ae-8ffc-711e84b66ed9" Nov 1 04:18:39.946247 env[1306]: 
time="2025-11-01T04:18:39.946219963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 04:18:39.965241 env[1306]: 2025-11-01 04:18:39.916 [INFO][4237] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" Nov 1 04:18:39.965241 env[1306]: 2025-11-01 04:18:39.916 [INFO][4237] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" iface="eth0" netns="/var/run/netns/cni-c3c796e9-5e02-c784-42b2-7eed8ff0aca1" Nov 1 04:18:39.965241 env[1306]: 2025-11-01 04:18:39.917 [INFO][4237] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" iface="eth0" netns="/var/run/netns/cni-c3c796e9-5e02-c784-42b2-7eed8ff0aca1" Nov 1 04:18:39.965241 env[1306]: 2025-11-01 04:18:39.918 [INFO][4237] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" iface="eth0" netns="/var/run/netns/cni-c3c796e9-5e02-c784-42b2-7eed8ff0aca1" Nov 1 04:18:39.965241 env[1306]: 2025-11-01 04:18:39.919 [INFO][4237] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" Nov 1 04:18:39.965241 env[1306]: 2025-11-01 04:18:39.919 [INFO][4237] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" Nov 1 04:18:39.965241 env[1306]: 2025-11-01 04:18:39.952 [INFO][4244] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" HandleID="k8s-pod-network.acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--w5zxm-eth0" Nov 1 04:18:39.965241 env[1306]: 2025-11-01 04:18:39.953 [INFO][4244] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 04:18:39.965241 env[1306]: 2025-11-01 04:18:39.953 [INFO][4244] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 04:18:39.965241 env[1306]: 2025-11-01 04:18:39.959 [WARNING][4244] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" HandleID="k8s-pod-network.acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--w5zxm-eth0" Nov 1 04:18:39.965241 env[1306]: 2025-11-01 04:18:39.959 [INFO][4244] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" HandleID="k8s-pod-network.acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--w5zxm-eth0" Nov 1 04:18:39.965241 env[1306]: 2025-11-01 04:18:39.961 [INFO][4244] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 04:18:39.965241 env[1306]: 2025-11-01 04:18:39.963 [INFO][4237] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" Nov 1 04:18:39.965830 env[1306]: time="2025-11-01T04:18:39.965393400Z" level=info msg="TearDown network for sandbox \"acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a\" successfully" Nov 1 04:18:39.965830 env[1306]: time="2025-11-01T04:18:39.965423711Z" level=info msg="StopPodSandbox for \"acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a\" returns successfully" Nov 1 04:18:39.966671 env[1306]: time="2025-11-01T04:18:39.966641131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dbfc7dd7f-w5zxm,Uid:e7537f60-17f7-4f3b-b511-49610ae00add,Namespace:calico-apiserver,Attempt:1,}" Nov 1 04:18:40.113837 systemd[1]: run-netns-cni\x2dc3c796e9\x2d5e02\x2dc784\x2d42b2\x2d7eed8ff0aca1.mount: Deactivated successfully. Nov 1 04:18:40.142823 systemd-networkd[1086]: cali4dab77b6fc6: Link UP Nov 1 04:18:40.152644 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 04:18:40.158055 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali4dab77b6fc6: link becomes ready Nov 1 04:18:40.159040 systemd-networkd[1086]: cali4dab77b6fc6: Gained carrier Nov 1 04:18:40.183292 env[1306]: 2025-11-01 04:18:40.025 [INFO][4250] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--w5zxm-eth0 calico-apiserver-6dbfc7dd7f- calico-apiserver e7537f60-17f7-4f3b-b511-49610ae00add 967 0 2025-11-01 04:18:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6dbfc7dd7f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-i9e8z.gb1.brightbox.com calico-apiserver-6dbfc7dd7f-w5zxm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4dab77b6fc6 [] [] }} ContainerID="db92ab62bc2f78917cadc51394b87f50764c2105a81f66420adbdb84162cf690" Namespace="calico-apiserver" Pod="calico-apiserver-6dbfc7dd7f-w5zxm" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--w5zxm-" Nov 1 04:18:40.183292 env[1306]: 2025-11-01 04:18:40.025 [INFO][4250] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="db92ab62bc2f78917cadc51394b87f50764c2105a81f66420adbdb84162cf690" Namespace="calico-apiserver" Pod="calico-apiserver-6dbfc7dd7f-w5zxm" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--w5zxm-eth0" Nov 1 04:18:40.183292 env[1306]: 2025-11-01 04:18:40.057 [INFO][4263] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db92ab62bc2f78917cadc51394b87f50764c2105a81f66420adbdb84162cf690" HandleID="k8s-pod-network.db92ab62bc2f78917cadc51394b87f50764c2105a81f66420adbdb84162cf690" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--w5zxm-eth0" Nov 1 04:18:40.183292 env[1306]: 2025-11-01 04:18:40.057 [INFO][4263] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="db92ab62bc2f78917cadc51394b87f50764c2105a81f66420adbdb84162cf690" HandleID="k8s-pod-network.db92ab62bc2f78917cadc51394b87f50764c2105a81f66420adbdb84162cf690" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--w5zxm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5010), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-i9e8z.gb1.brightbox.com", 
"pod":"calico-apiserver-6dbfc7dd7f-w5zxm", "timestamp":"2025-11-01 04:18:40.057103863 +0000 UTC"}, Hostname:"srv-i9e8z.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 04:18:40.183292 env[1306]: 2025-11-01 04:18:40.057 [INFO][4263] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 04:18:40.183292 env[1306]: 2025-11-01 04:18:40.057 [INFO][4263] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 04:18:40.183292 env[1306]: 2025-11-01 04:18:40.057 [INFO][4263] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-i9e8z.gb1.brightbox.com' Nov 1 04:18:40.183292 env[1306]: 2025-11-01 04:18:40.067 [INFO][4263] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.db92ab62bc2f78917cadc51394b87f50764c2105a81f66420adbdb84162cf690" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:40.183292 env[1306]: 2025-11-01 04:18:40.073 [INFO][4263] ipam/ipam.go 394: Looking up existing affinities for host host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:40.183292 env[1306]: 2025-11-01 04:18:40.096 [INFO][4263] ipam/ipam.go 511: Trying affinity for 192.168.47.64/26 host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:40.183292 env[1306]: 2025-11-01 04:18:40.099 [INFO][4263] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.64/26 host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:40.183292 env[1306]: 2025-11-01 04:18:40.103 [INFO][4263] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.64/26 host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:40.183292 env[1306]: 2025-11-01 04:18:40.103 [INFO][4263] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.47.64/26 handle="k8s-pod-network.db92ab62bc2f78917cadc51394b87f50764c2105a81f66420adbdb84162cf690" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:40.183292 env[1306]: 2025-11-01 04:18:40.105 [INFO][4263] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.db92ab62bc2f78917cadc51394b87f50764c2105a81f66420adbdb84162cf690 Nov 1 04:18:40.183292 env[1306]: 2025-11-01 04:18:40.114 [INFO][4263] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.47.64/26 handle="k8s-pod-network.db92ab62bc2f78917cadc51394b87f50764c2105a81f66420adbdb84162cf690" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:40.183292 env[1306]: 2025-11-01 04:18:40.123 [INFO][4263] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.47.71/26] block=192.168.47.64/26 handle="k8s-pod-network.db92ab62bc2f78917cadc51394b87f50764c2105a81f66420adbdb84162cf690" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:40.183292 env[1306]: 2025-11-01 04:18:40.123 [INFO][4263] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.71/26] handle="k8s-pod-network.db92ab62bc2f78917cadc51394b87f50764c2105a81f66420adbdb84162cf690" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:40.183292 env[1306]: 2025-11-01 04:18:40.123 [INFO][4263] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 04:18:40.183292 env[1306]: 2025-11-01 04:18:40.123 [INFO][4263] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.47.71/26] IPv6=[] ContainerID="db92ab62bc2f78917cadc51394b87f50764c2105a81f66420adbdb84162cf690" HandleID="k8s-pod-network.db92ab62bc2f78917cadc51394b87f50764c2105a81f66420adbdb84162cf690" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--w5zxm-eth0" Nov 1 04:18:40.184417 env[1306]: 2025-11-01 04:18:40.126 [INFO][4250] cni-plugin/k8s.go 418: Populated endpoint ContainerID="db92ab62bc2f78917cadc51394b87f50764c2105a81f66420adbdb84162cf690" Namespace="calico-apiserver" Pod="calico-apiserver-6dbfc7dd7f-w5zxm" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--w5zxm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--w5zxm-eth0", GenerateName:"calico-apiserver-6dbfc7dd7f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e7537f60-17f7-4f3b-b511-49610ae00add", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 4, 18, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dbfc7dd7f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i9e8z.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-6dbfc7dd7f-w5zxm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4dab77b6fc6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 04:18:40.184417 env[1306]: 2025-11-01 04:18:40.126 [INFO][4250] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.71/32] ContainerID="db92ab62bc2f78917cadc51394b87f50764c2105a81f66420adbdb84162cf690" Namespace="calico-apiserver" Pod="calico-apiserver-6dbfc7dd7f-w5zxm" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--w5zxm-eth0" Nov 1 04:18:40.184417 env[1306]: 2025-11-01 04:18:40.126 [INFO][4250] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4dab77b6fc6 ContainerID="db92ab62bc2f78917cadc51394b87f50764c2105a81f66420adbdb84162cf690" Namespace="calico-apiserver" Pod="calico-apiserver-6dbfc7dd7f-w5zxm" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--w5zxm-eth0" Nov 1 04:18:40.184417 env[1306]: 2025-11-01 04:18:40.160 [INFO][4250] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db92ab62bc2f78917cadc51394b87f50764c2105a81f66420adbdb84162cf690" Namespace="calico-apiserver" Pod="calico-apiserver-6dbfc7dd7f-w5zxm" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--w5zxm-eth0" Nov 1 04:18:40.184417 env[1306]: 2025-11-01 04:18:40.160 [INFO][4250] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="db92ab62bc2f78917cadc51394b87f50764c2105a81f66420adbdb84162cf690" Namespace="calico-apiserver" Pod="calico-apiserver-6dbfc7dd7f-w5zxm" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--w5zxm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--w5zxm-eth0", GenerateName:"calico-apiserver-6dbfc7dd7f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e7537f60-17f7-4f3b-b511-49610ae00add", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 4, 18, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dbfc7dd7f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i9e8z.gb1.brightbox.com", ContainerID:"db92ab62bc2f78917cadc51394b87f50764c2105a81f66420adbdb84162cf690", Pod:"calico-apiserver-6dbfc7dd7f-w5zxm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4dab77b6fc6", MAC:"1a:7d:80:aa:e5:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 04:18:40.184417 env[1306]: 2025-11-01 04:18:40.176 [INFO][4250] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="db92ab62bc2f78917cadc51394b87f50764c2105a81f66420adbdb84162cf690" Namespace="calico-apiserver" Pod="calico-apiserver-6dbfc7dd7f-w5zxm" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--w5zxm-eth0" Nov 1 04:18:40.196000 audit[4282]: NETFILTER_CFG table=filter:112 family=2 entries=57 op=nft_register_chain pid=4282 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 04:18:40.196000 audit[4282]: SYSCALL arch=c000003e syscall=46 success=yes exit=27828 a0=3 a1=7fffb5af7f60 a2=0 a3=7fffb5af7f4c items=0 ppid=3499 pid=4282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:40.196000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 04:18:40.202584 env[1306]: time="2025-11-01T04:18:40.202468935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 04:18:40.202584 env[1306]: time="2025-11-01T04:18:40.202520536Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 04:18:40.202584 env[1306]: time="2025-11-01T04:18:40.202532078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 04:18:40.202881 env[1306]: time="2025-11-01T04:18:40.202831996Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/db92ab62bc2f78917cadc51394b87f50764c2105a81f66420adbdb84162cf690 pid=4285 runtime=io.containerd.runc.v2 Nov 1 04:18:40.225044 kubelet[2178]: E1101 04:18:40.225006 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7nddq" podUID="09a6446d-f1c6-40ae-8ffc-711e84b66ed9" Nov 1 04:18:40.228493 kubelet[2178]: E1101 04:18:40.228457 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4ccfj" podUID="f7f98be6-9d36-4c44-bedf-cd179c76bbfe" Nov 1 04:18:40.248168 kubelet[2178]: I1101 04:18:40.248119 2178 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cvw4c" podStartSLOduration=44.248090477 podStartE2EDuration="44.248090477s" podCreationTimestamp="2025-11-01 04:17:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 04:18:40.24133362 +0000 UTC m=+47.702236318" watchObservedRunningTime="2025-11-01 04:18:40.248090477 +0000 UTC m=+47.708993198" Nov 1 04:18:40.266841 env[1306]: time="2025-11-01T04:18:40.266794546Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 04:18:40.291420 systemd[1]: run-containerd-runc-k8s.io-db92ab62bc2f78917cadc51394b87f50764c2105a81f66420adbdb84162cf690-runc.Mrf11g.mount: Deactivated successfully. 
Nov 1 04:18:40.294773 env[1306]: time="2025-11-01T04:18:40.294707369Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 04:18:40.295554 kubelet[2178]: E1101 04:18:40.295516 2178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 04:18:40.295742 kubelet[2178]: E1101 04:18:40.295724 2178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 04:18:40.296020 kubelet[2178]: E1101 04:18:40.295977 2178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prpcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6dbfc7dd7f-64jld_calico-apiserver(cae162c8-7a97-49e7-94b4-82cbdda1df19): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 04:18:40.302697 kubelet[2178]: E1101 04:18:40.302662 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dbfc7dd7f-64jld" podUID="cae162c8-7a97-49e7-94b4-82cbdda1df19" Nov 1 04:18:40.349000 audit[4314]: NETFILTER_CFG table=filter:113 family=2 entries=14 op=nft_register_rule pid=4314 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:18:40.349000 audit[4314]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc89da9770 a2=0 a3=7ffc89da975c items=0 ppid=2289 pid=4314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:40.349000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:18:40.361000 audit[4314]: NETFILTER_CFG table=nat:114 family=2 entries=56 op=nft_register_chain pid=4314 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:18:40.361000 audit[4314]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffc89da9770 a2=0 a3=7ffc89da975c items=0 ppid=2289 pid=4314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:40.361000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:18:40.405186 env[1306]: time="2025-11-01T04:18:40.405143162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6dbfc7dd7f-w5zxm,Uid:e7537f60-17f7-4f3b-b511-49610ae00add,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"db92ab62bc2f78917cadc51394b87f50764c2105a81f66420adbdb84162cf690\"" Nov 1 04:18:40.409669 env[1306]: time="2025-11-01T04:18:40.409621812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 04:18:40.534564 systemd-networkd[1086]: cali86caf317204: Gained IPv6LL Nov 1 04:18:40.662923 systemd-networkd[1086]: vxlan.calico: Gained IPv6LL Nov 1 04:18:40.715377 env[1306]: time="2025-11-01T04:18:40.715080866Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 04:18:40.717559 env[1306]: time="2025-11-01T04:18:40.717403337Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 04:18:40.718938 kubelet[2178]: E1101 04:18:40.718775 2178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 04:18:40.719315 kubelet[2178]: E1101 04:18:40.718953 2178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 04:18:40.719864 kubelet[2178]: E1101 04:18:40.719518 2178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hqpx6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6dbfc7dd7f-w5zxm_calico-apiserver(e7537f60-17f7-4f3b-b511-49610ae00add): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 04:18:40.721087 kubelet[2178]: E1101 04:18:40.721021 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-6dbfc7dd7f-w5zxm" podUID="e7537f60-17f7-4f3b-b511-49610ae00add" Nov 1 04:18:40.790890 systemd-networkd[1086]: cali7c5fb42d33a: Gained IPv6LL Nov 1 04:18:40.791658 systemd-networkd[1086]: calid07a9cbe3af: Gained IPv6LL Nov 1 04:18:41.207886 kubelet[2178]: I1101 04:18:41.207686 2178 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 04:18:41.239357 kubelet[2178]: E1101 04:18:41.237975 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dbfc7dd7f-w5zxm" podUID="e7537f60-17f7-4f3b-b511-49610ae00add" Nov 1 04:18:41.239357 kubelet[2178]: E1101 04:18:41.238372 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dbfc7dd7f-64jld" podUID="cae162c8-7a97-49e7-94b4-82cbdda1df19" Nov 1 04:18:41.239357 kubelet[2178]: E1101 04:18:41.238441 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7nddq" podUID="09a6446d-f1c6-40ae-8ffc-711e84b66ed9" Nov 1 04:18:41.275607 systemd[1]: run-containerd-runc-k8s.io-6a993f3df2ffe2d60d4e0201f210657e0876693cb90b43227daec1c05a01b439-runc.gmTdlT.mount: Deactivated successfully. 
Nov 1 04:18:41.327000 audit[4348]: NETFILTER_CFG table=filter:115 family=2 entries=14 op=nft_register_rule pid=4348 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:18:41.327000 audit[4348]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffed9d2030 a2=0 a3=7fffed9d201c items=0 ppid=2289 pid=4348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:41.327000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:18:41.333000 audit[4348]: NETFILTER_CFG table=nat:116 family=2 entries=20 op=nft_register_rule pid=4348 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:18:41.333000 audit[4348]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffed9d2030 a2=0 a3=7fffed9d201c items=0 ppid=2289 pid=4348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:41.333000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:18:41.441932 systemd[1]: run-containerd-runc-k8s.io-6a993f3df2ffe2d60d4e0201f210657e0876693cb90b43227daec1c05a01b439-runc.uVnXjW.mount: Deactivated successfully. Nov 1 04:18:41.750550 systemd-networkd[1086]: cali4dab77b6fc6: Gained IPv6LL Nov 1 04:18:41.838812 env[1306]: time="2025-11-01T04:18:41.838690208Z" level=info msg="StopPodSandbox for \"042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c\"" Nov 1 04:18:41.969617 env[1306]: 2025-11-01 04:18:41.917 [INFO][4381] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" Nov 1 04:18:41.969617 env[1306]: 2025-11-01 04:18:41.917 [INFO][4381] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" iface="eth0" netns="/var/run/netns/cni-84ff1e6b-4e99-f8ce-6e40-8072c237738d" Nov 1 04:18:41.969617 env[1306]: 2025-11-01 04:18:41.919 [INFO][4381] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" iface="eth0" netns="/var/run/netns/cni-84ff1e6b-4e99-f8ce-6e40-8072c237738d" Nov 1 04:18:41.969617 env[1306]: 2025-11-01 04:18:41.920 [INFO][4381] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" iface="eth0" netns="/var/run/netns/cni-84ff1e6b-4e99-f8ce-6e40-8072c237738d" Nov 1 04:18:41.969617 env[1306]: 2025-11-01 04:18:41.920 [INFO][4381] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" Nov 1 04:18:41.969617 env[1306]: 2025-11-01 04:18:41.920 [INFO][4381] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" Nov 1 04:18:41.969617 env[1306]: 2025-11-01 04:18:41.948 [INFO][4388] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" HandleID="k8s-pod-network.042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--kube--controllers--5857878c7b--k98b2-eth0" Nov 1 04:18:41.969617 env[1306]: 2025-11-01 04:18:41.948 [INFO][4388] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 04:18:41.969617 env[1306]: 2025-11-01 04:18:41.948 [INFO][4388] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 04:18:41.969617 env[1306]: 2025-11-01 04:18:41.957 [WARNING][4388] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" HandleID="k8s-pod-network.042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--kube--controllers--5857878c7b--k98b2-eth0" Nov 1 04:18:41.969617 env[1306]: 2025-11-01 04:18:41.957 [INFO][4388] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" HandleID="k8s-pod-network.042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--kube--controllers--5857878c7b--k98b2-eth0" Nov 1 04:18:41.969617 env[1306]: 2025-11-01 04:18:41.960 [INFO][4388] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 04:18:41.969617 env[1306]: 2025-11-01 04:18:41.963 [INFO][4381] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" Nov 1 04:18:41.969375 systemd[1]: run-netns-cni\x2d84ff1e6b\x2d4e99\x2df8ce\x2d6e40\x2d8072c237738d.mount: Deactivated successfully. 
Nov 1 04:18:41.971373 env[1306]: time="2025-11-01T04:18:41.970424933Z" level=info msg="TearDown network for sandbox \"042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c\" successfully" Nov 1 04:18:41.971373 env[1306]: time="2025-11-01T04:18:41.970465860Z" level=info msg="StopPodSandbox for \"042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c\" returns successfully" Nov 1 04:18:41.971705 env[1306]: time="2025-11-01T04:18:41.971664525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5857878c7b-k98b2,Uid:875389eb-2978-4aa4-ad6e-7b619ce206e3,Namespace:calico-system,Attempt:1,}" Nov 1 04:18:42.138664 systemd-networkd[1086]: cali2449e6566b4: Link UP Nov 1 04:18:42.140450 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Nov 1 04:18:42.140539 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2449e6566b4: link becomes ready Nov 1 04:18:42.142386 systemd-networkd[1086]: cali2449e6566b4: Gained carrier Nov 1 04:18:42.181709 env[1306]: 2025-11-01 04:18:42.026 [INFO][4395] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--i9e8z.gb1.brightbox.com-k8s-calico--kube--controllers--5857878c7b--k98b2-eth0 calico-kube-controllers-5857878c7b- calico-system 875389eb-2978-4aa4-ad6e-7b619ce206e3 1017 0 2025-11-01 04:18:12 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5857878c7b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-i9e8z.gb1.brightbox.com calico-kube-controllers-5857878c7b-k98b2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2449e6566b4 [] [] }} ContainerID="c69e7a05886b50d59b88e6a88ebb5b8ca92a7c51804d95b1bde7a4e665f3bd19" Namespace="calico-system" Pod="calico-kube-controllers-5857878c7b-k98b2" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-calico--kube--controllers--5857878c7b--k98b2-" Nov 1 04:18:42.181709 env[1306]: 2025-11-01 04:18:42.028 [INFO][4395] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c69e7a05886b50d59b88e6a88ebb5b8ca92a7c51804d95b1bde7a4e665f3bd19" Namespace="calico-system" Pod="calico-kube-controllers-5857878c7b-k98b2" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-calico--kube--controllers--5857878c7b--k98b2-eth0" Nov 1 04:18:42.181709 env[1306]: 2025-11-01 04:18:42.075 [INFO][4407] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c69e7a05886b50d59b88e6a88ebb5b8ca92a7c51804d95b1bde7a4e665f3bd19" HandleID="k8s-pod-network.c69e7a05886b50d59b88e6a88ebb5b8ca92a7c51804d95b1bde7a4e665f3bd19" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--kube--controllers--5857878c7b--k98b2-eth0" Nov 1 04:18:42.181709 env[1306]: 2025-11-01 04:18:42.075 [INFO][4407] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c69e7a05886b50d59b88e6a88ebb5b8ca92a7c51804d95b1bde7a4e665f3bd19" HandleID="k8s-pod-network.c69e7a05886b50d59b88e6a88ebb5b8ca92a7c51804d95b1bde7a4e665f3bd19" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--kube--controllers--5857878c7b--k98b2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd2a0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-i9e8z.gb1.brightbox.com", "pod":"calico-kube-controllers-5857878c7b-k98b2", "timestamp":"2025-11-01 04:18:42.075353956 +0000 UTC"}, Hostname:"srv-i9e8z.gb1.brightbox.com", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 04:18:42.181709 env[1306]: 2025-11-01 04:18:42.075 [INFO][4407] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 04:18:42.181709 env[1306]: 2025-11-01 04:18:42.075 [INFO][4407] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 04:18:42.181709 env[1306]: 2025-11-01 04:18:42.075 [INFO][4407] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-i9e8z.gb1.brightbox.com' Nov 1 04:18:42.181709 env[1306]: 2025-11-01 04:18:42.090 [INFO][4407] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c69e7a05886b50d59b88e6a88ebb5b8ca92a7c51804d95b1bde7a4e665f3bd19" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:42.181709 env[1306]: 2025-11-01 04:18:42.097 [INFO][4407] ipam/ipam.go 394: Looking up existing affinities for host host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:42.181709 env[1306]: 2025-11-01 04:18:42.102 [INFO][4407] ipam/ipam.go 511: Trying affinity for 192.168.47.64/26 host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:42.181709 env[1306]: 2025-11-01 04:18:42.104 [INFO][4407] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.64/26 host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:42.181709 env[1306]: 2025-11-01 04:18:42.109 [INFO][4407] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.64/26 host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:42.181709 env[1306]: 2025-11-01 04:18:42.109 [INFO][4407] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.47.64/26 handle="k8s-pod-network.c69e7a05886b50d59b88e6a88ebb5b8ca92a7c51804d95b1bde7a4e665f3bd19" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:42.181709 env[1306]: 2025-11-01 04:18:42.112 [INFO][4407] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c69e7a05886b50d59b88e6a88ebb5b8ca92a7c51804d95b1bde7a4e665f3bd19 Nov 1 04:18:42.181709 env[1306]: 2025-11-01 04:18:42.117 [INFO][4407] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.47.64/26 handle="k8s-pod-network.c69e7a05886b50d59b88e6a88ebb5b8ca92a7c51804d95b1bde7a4e665f3bd19" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:42.181709 env[1306]: 2025-11-01 04:18:42.125 [INFO][4407] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.47.72/26] block=192.168.47.64/26 handle="k8s-pod-network.c69e7a05886b50d59b88e6a88ebb5b8ca92a7c51804d95b1bde7a4e665f3bd19" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:42.181709 env[1306]: 2025-11-01 04:18:42.125 [INFO][4407] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.72/26] handle="k8s-pod-network.c69e7a05886b50d59b88e6a88ebb5b8ca92a7c51804d95b1bde7a4e665f3bd19" host="srv-i9e8z.gb1.brightbox.com" Nov 1 04:18:42.181709 env[1306]: 2025-11-01 04:18:42.125 [INFO][4407] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 04:18:42.181709 env[1306]: 2025-11-01 04:18:42.125 [INFO][4407] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.47.72/26] IPv6=[] ContainerID="c69e7a05886b50d59b88e6a88ebb5b8ca92a7c51804d95b1bde7a4e665f3bd19" HandleID="k8s-pod-network.c69e7a05886b50d59b88e6a88ebb5b8ca92a7c51804d95b1bde7a4e665f3bd19" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--kube--controllers--5857878c7b--k98b2-eth0" Nov 1 04:18:42.182535 env[1306]: 2025-11-01 04:18:42.128 [INFO][4395] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c69e7a05886b50d59b88e6a88ebb5b8ca92a7c51804d95b1bde7a4e665f3bd19" Namespace="calico-system" Pod="calico-kube-controllers-5857878c7b-k98b2" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-calico--kube--controllers--5857878c7b--k98b2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i9e8z.gb1.brightbox.com-k8s-calico--kube--controllers--5857878c7b--k98b2-eth0", GenerateName:"calico-kube-controllers-5857878c7b-", Namespace:"calico-system", SelfLink:"", UID:"875389eb-2978-4aa4-ad6e-7b619ce206e3", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 4, 18, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5857878c7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i9e8z.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-5857878c7b-k98b2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2449e6566b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 04:18:42.182535 env[1306]: 2025-11-01 04:18:42.128 [INFO][4395] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.72/32] ContainerID="c69e7a05886b50d59b88e6a88ebb5b8ca92a7c51804d95b1bde7a4e665f3bd19" Namespace="calico-system" Pod="calico-kube-controllers-5857878c7b-k98b2" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-calico--kube--controllers--5857878c7b--k98b2-eth0" Nov 1 04:18:42.182535 env[1306]: 2025-11-01 04:18:42.128 [INFO][4395] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2449e6566b4 ContainerID="c69e7a05886b50d59b88e6a88ebb5b8ca92a7c51804d95b1bde7a4e665f3bd19" Namespace="calico-system" Pod="calico-kube-controllers-5857878c7b-k98b2" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-calico--kube--controllers--5857878c7b--k98b2-eth0" Nov 1 04:18:42.182535 env[1306]: 2025-11-01 04:18:42.141 [INFO][4395] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c69e7a05886b50d59b88e6a88ebb5b8ca92a7c51804d95b1bde7a4e665f3bd19" Namespace="calico-system" Pod="calico-kube-controllers-5857878c7b-k98b2" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-calico--kube--controllers--5857878c7b--k98b2-eth0" Nov 1 04:18:42.182535 
env[1306]: 2025-11-01 04:18:42.143 [INFO][4395] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c69e7a05886b50d59b88e6a88ebb5b8ca92a7c51804d95b1bde7a4e665f3bd19" Namespace="calico-system" Pod="calico-kube-controllers-5857878c7b-k98b2" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-calico--kube--controllers--5857878c7b--k98b2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i9e8z.gb1.brightbox.com-k8s-calico--kube--controllers--5857878c7b--k98b2-eth0", GenerateName:"calico-kube-controllers-5857878c7b-", Namespace:"calico-system", SelfLink:"", UID:"875389eb-2978-4aa4-ad6e-7b619ce206e3", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 4, 18, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5857878c7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i9e8z.gb1.brightbox.com", ContainerID:"c69e7a05886b50d59b88e6a88ebb5b8ca92a7c51804d95b1bde7a4e665f3bd19", Pod:"calico-kube-controllers-5857878c7b-k98b2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2449e6566b4", MAC:"9e:2d:11:d1:bf:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 04:18:42.182535 env[1306]: 2025-11-01 04:18:42.177 [INFO][4395] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c69e7a05886b50d59b88e6a88ebb5b8ca92a7c51804d95b1bde7a4e665f3bd19" Namespace="calico-system" Pod="calico-kube-controllers-5857878c7b-k98b2" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-calico--kube--controllers--5857878c7b--k98b2-eth0" Nov 1 04:18:42.200110 env[1306]: time="2025-11-01T04:18:42.199974718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 04:18:42.200110 env[1306]: time="2025-11-01T04:18:42.200027697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 04:18:42.200110 env[1306]: time="2025-11-01T04:18:42.200038828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 04:18:42.204062 env[1306]: time="2025-11-01T04:18:42.200499647Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c69e7a05886b50d59b88e6a88ebb5b8ca92a7c51804d95b1bde7a4e665f3bd19 pid=4430 runtime=io.containerd.runc.v2 Nov 1 04:18:42.209000 audit[4440]: NETFILTER_CFG table=filter:117 family=2 entries=60 op=nft_register_chain pid=4440 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 04:18:42.212253 kernel: kauditd_printk_skb: 577 callbacks suppressed Nov 1 04:18:42.212561 kernel: audit: type=1325 audit(1761970722.209:424): table=filter:117 family=2 entries=60 op=nft_register_chain pid=4440 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Nov 1 04:18:42.209000 audit[4440]: SYSCALL arch=c000003e syscall=46 success=yes exit=26704 a0=3 a1=7ffc43043360 a2=0 a3=7ffc4304334c items=0 ppid=3499 pid=4440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:42.218805 kernel: audit: type=1300 audit(1761970722.209:424): arch=c000003e syscall=46 success=yes exit=26704 a0=3 a1=7ffc43043360 a2=0 a3=7ffc4304334c items=0 ppid=3499 pid=4440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:18:42.209000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 04:18:42.221602 kernel: audit: type=1327 audit(1761970722.209:424): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Nov 1 04:18:42.242686 kubelet[2178]: E1101 04:18:42.241379 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dbfc7dd7f-w5zxm" podUID="e7537f60-17f7-4f3b-b511-49610ae00add" Nov 1 04:18:42.299483 env[1306]: time="2025-11-01T04:18:42.299437070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5857878c7b-k98b2,Uid:875389eb-2978-4aa4-ad6e-7b619ce206e3,Namespace:calico-system,Attempt:1,} returns sandbox id \"c69e7a05886b50d59b88e6a88ebb5b8ca92a7c51804d95b1bde7a4e665f3bd19\"" Nov 1 04:18:42.304827 env[1306]: time="2025-11-01T04:18:42.304798242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 04:18:42.618535 env[1306]: time="2025-11-01T04:18:42.613587538Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 04:18:42.618535 env[1306]: time="2025-11-01T04:18:42.614943338Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 04:18:42.619433 kubelet[2178]: E1101 04:18:42.619316 2178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 04:18:42.619755 kubelet[2178]: E1101 04:18:42.619706 2178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 04:18:42.620673 kubelet[2178]: E1101 04:18:42.620516 2178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lxmxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5857878c7b-k98b2_calico-system(875389eb-2978-4aa4-ad6e-7b619ce206e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 04:18:42.622967 kubelet[2178]: E1101 04:18:42.622923 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5857878c7b-k98b2" podUID="875389eb-2978-4aa4-ad6e-7b619ce206e3" Nov 1 04:18:43.243296 kubelet[2178]: E1101 04:18:43.243241 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5857878c7b-k98b2" podUID="875389eb-2978-4aa4-ad6e-7b619ce206e3" Nov 1 04:18:43.670490 systemd-networkd[1086]: cali2449e6566b4: Gained IPv6LL Nov 1 04:18:44.259417 kubelet[2178]: E1101 04:18:44.259372 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5857878c7b-k98b2" podUID="875389eb-2978-4aa4-ad6e-7b619ce206e3" Nov 1 04:18:51.839091 env[1306]: time="2025-11-01T04:18:51.838952369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 04:18:52.151782 env[1306]: time="2025-11-01T04:18:52.151669695Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 04:18:52.152415 
env[1306]: time="2025-11-01T04:18:52.152308232Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 04:18:52.152684 kubelet[2178]: E1101 04:18:52.152602 2178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 04:18:52.154293 kubelet[2178]: E1101 04:18:52.152681 2178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 04:18:52.154293 kubelet[2178]: E1101 04:18:52.152848 2178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fwflm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4ccfj_calico-system(f7f98be6-9d36-4c44-bedf-cd179c76bbfe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 04:18:52.156433 env[1306]: time="2025-11-01T04:18:52.156027420Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 04:18:52.464112 env[1306]: time="2025-11-01T04:18:52.463925611Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 04:18:52.465673 env[1306]: time="2025-11-01T04:18:52.465607553Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 04:18:52.466192 kubelet[2178]: E1101 04:18:52.466143 2178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 04:18:52.466401 kubelet[2178]: E1101 04:18:52.466369 2178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 04:18:52.466813 kubelet[2178]: E1101 04:18:52.466653 2178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fwflm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod csi-node-driver-4ccfj_calico-system(f7f98be6-9d36-4c44-bedf-cd179c76bbfe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 04:18:52.468418 kubelet[2178]: E1101 04:18:52.468354 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4ccfj" podUID="f7f98be6-9d36-4c44-bedf-cd179c76bbfe" Nov 1 04:18:52.800003 env[1306]: time="2025-11-01T04:18:52.799394528Z" level=info msg="StopPodSandbox for \"00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416\"" Nov 1 04:18:52.839447 env[1306]: time="2025-11-01T04:18:52.839407150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 04:18:53.027458 env[1306]: 2025-11-01 04:18:52.929 [WARNING][4490] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i9e8z.gb1.brightbox.com-k8s-csi--node--driver--4ccfj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f7f98be6-9d36-4c44-bedf-cd179c76bbfe", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 4, 18, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i9e8z.gb1.brightbox.com", ContainerID:"bee9d8db52df2886a23439a5970f164baff1c513663b18e4759754260393f4fe", Pod:"csi-node-driver-4ccfj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.47.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibf83a63cd89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 04:18:53.027458 env[1306]: 2025-11-01 04:18:52.929 [INFO][4490] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" Nov 1 04:18:53.027458 env[1306]: 2025-11-01 04:18:52.929 [INFO][4490] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" iface="eth0" netns="" Nov 1 04:18:53.027458 env[1306]: 2025-11-01 04:18:52.929 [INFO][4490] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" Nov 1 04:18:53.027458 env[1306]: 2025-11-01 04:18:52.929 [INFO][4490] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" Nov 1 04:18:53.027458 env[1306]: 2025-11-01 04:18:53.008 [INFO][4499] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" HandleID="k8s-pod-network.00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" Workload="srv--i9e8z.gb1.brightbox.com-k8s-csi--node--driver--4ccfj-eth0" Nov 1 04:18:53.027458 env[1306]: 2025-11-01 04:18:53.008 [INFO][4499] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 04:18:53.027458 env[1306]: 2025-11-01 04:18:53.008 [INFO][4499] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 04:18:53.027458 env[1306]: 2025-11-01 04:18:53.021 [WARNING][4499] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" HandleID="k8s-pod-network.00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" Workload="srv--i9e8z.gb1.brightbox.com-k8s-csi--node--driver--4ccfj-eth0" Nov 1 04:18:53.027458 env[1306]: 2025-11-01 04:18:53.021 [INFO][4499] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" HandleID="k8s-pod-network.00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" Workload="srv--i9e8z.gb1.brightbox.com-k8s-csi--node--driver--4ccfj-eth0" Nov 1 04:18:53.027458 env[1306]: 2025-11-01 04:18:53.022 [INFO][4499] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 04:18:53.027458 env[1306]: 2025-11-01 04:18:53.025 [INFO][4490] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" Nov 1 04:18:53.028563 env[1306]: time="2025-11-01T04:18:53.027484493Z" level=info msg="TearDown network for sandbox \"00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416\" successfully" Nov 1 04:18:53.028563 env[1306]: time="2025-11-01T04:18:53.027516016Z" level=info msg="StopPodSandbox for \"00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416\" returns successfully" Nov 1 04:18:53.028785 env[1306]: time="2025-11-01T04:18:53.028754428Z" level=info msg="RemovePodSandbox for \"00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416\"" Nov 1 04:18:53.028914 env[1306]: time="2025-11-01T04:18:53.028868250Z" level=info msg="Forcibly stopping sandbox \"00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416\"" Nov 1 04:18:53.162340 env[1306]: time="2025-11-01T04:18:53.160363521Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 04:18:53.163432 env[1306]: time="2025-11-01T04:18:53.163370632Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 04:18:53.164287 kubelet[2178]: E1101 04:18:53.163660 2178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 04:18:53.164287 kubelet[2178]: E1101 04:18:53.163720 2178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 04:18:53.164287 kubelet[2178]: E1101 04:18:53.163886 2178 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2hz7s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7nddq_calico-system(09a6446d-f1c6-40ae-8ffc-711e84b66ed9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 04:18:53.165418 kubelet[2178]: E1101 04:18:53.165356 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7nddq" podUID="09a6446d-f1c6-40ae-8ffc-711e84b66ed9" Nov 1 04:18:53.309029 env[1306]: 2025-11-01 
04:18:53.126 [WARNING][4516] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i9e8z.gb1.brightbox.com-k8s-csi--node--driver--4ccfj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f7f98be6-9d36-4c44-bedf-cd179c76bbfe", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 4, 18, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i9e8z.gb1.brightbox.com", ContainerID:"bee9d8db52df2886a23439a5970f164baff1c513663b18e4759754260393f4fe", Pod:"csi-node-driver-4ccfj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.47.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibf83a63cd89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 04:18:53.309029 env[1306]: 2025-11-01 04:18:53.126 [INFO][4516] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" Nov 1 04:18:53.309029 env[1306]: 2025-11-01 04:18:53.126 [INFO][4516] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" iface="eth0" netns="" Nov 1 04:18:53.309029 env[1306]: 2025-11-01 04:18:53.126 [INFO][4516] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" Nov 1 04:18:53.309029 env[1306]: 2025-11-01 04:18:53.126 [INFO][4516] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" Nov 1 04:18:53.309029 env[1306]: 2025-11-01 04:18:53.259 [INFO][4523] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" HandleID="k8s-pod-network.00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" Workload="srv--i9e8z.gb1.brightbox.com-k8s-csi--node--driver--4ccfj-eth0" Nov 1 04:18:53.309029 env[1306]: 2025-11-01 04:18:53.261 [INFO][4523] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 04:18:53.309029 env[1306]: 2025-11-01 04:18:53.261 [INFO][4523] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 04:18:53.309029 env[1306]: 2025-11-01 04:18:53.303 [WARNING][4523] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" HandleID="k8s-pod-network.00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" Workload="srv--i9e8z.gb1.brightbox.com-k8s-csi--node--driver--4ccfj-eth0" Nov 1 04:18:53.309029 env[1306]: 2025-11-01 04:18:53.303 [INFO][4523] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" HandleID="k8s-pod-network.00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" Workload="srv--i9e8z.gb1.brightbox.com-k8s-csi--node--driver--4ccfj-eth0" Nov 1 04:18:53.309029 env[1306]: 2025-11-01 04:18:53.305 [INFO][4523] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 04:18:53.309029 env[1306]: 2025-11-01 04:18:53.307 [INFO][4516] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416" Nov 1 04:18:53.309702 env[1306]: time="2025-11-01T04:18:53.309069024Z" level=info msg="TearDown network for sandbox \"00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416\" successfully" Nov 1 04:18:53.321167 env[1306]: time="2025-11-01T04:18:53.321111323Z" level=info msg="RemovePodSandbox \"00a697d9d54af5c086e68b5f0402e35e9a4d51cba463af34aa8c4579a5280416\" returns successfully" Nov 1 04:18:53.321844 env[1306]: time="2025-11-01T04:18:53.321818458Z" level=info msg="StopPodSandbox for \"042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c\"" Nov 1 04:18:53.471667 env[1306]: 2025-11-01 04:18:53.410 [WARNING][4537] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i9e8z.gb1.brightbox.com-k8s-calico--kube--controllers--5857878c7b--k98b2-eth0", GenerateName:"calico-kube-controllers-5857878c7b-", Namespace:"calico-system", SelfLink:"", UID:"875389eb-2978-4aa4-ad6e-7b619ce206e3", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 4, 18, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5857878c7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i9e8z.gb1.brightbox.com", ContainerID:"c69e7a05886b50d59b88e6a88ebb5b8ca92a7c51804d95b1bde7a4e665f3bd19", Pod:"calico-kube-controllers-5857878c7b-k98b2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2449e6566b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 04:18:53.471667 env[1306]: 2025-11-01 04:18:53.410 [INFO][4537] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" Nov 1 04:18:53.471667 env[1306]: 2025-11-01 04:18:53.410 [INFO][4537] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" iface="eth0" netns="" Nov 1 04:18:53.471667 env[1306]: 2025-11-01 04:18:53.410 [INFO][4537] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" Nov 1 04:18:53.471667 env[1306]: 2025-11-01 04:18:53.410 [INFO][4537] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" Nov 1 04:18:53.471667 env[1306]: 2025-11-01 04:18:53.451 [INFO][4544] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" HandleID="k8s-pod-network.042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--kube--controllers--5857878c7b--k98b2-eth0" Nov 1 04:18:53.471667 env[1306]: 2025-11-01 04:18:53.452 [INFO][4544] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 04:18:53.471667 env[1306]: 2025-11-01 04:18:53.452 [INFO][4544] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 04:18:53.471667 env[1306]: 2025-11-01 04:18:53.459 [WARNING][4544] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" HandleID="k8s-pod-network.042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--kube--controllers--5857878c7b--k98b2-eth0" Nov 1 04:18:53.471667 env[1306]: 2025-11-01 04:18:53.459 [INFO][4544] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" HandleID="k8s-pod-network.042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--kube--controllers--5857878c7b--k98b2-eth0" Nov 1 04:18:53.471667 env[1306]: 2025-11-01 04:18:53.461 [INFO][4544] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 04:18:53.471667 env[1306]: 2025-11-01 04:18:53.464 [INFO][4537] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" Nov 1 04:18:53.476536 env[1306]: time="2025-11-01T04:18:53.471887373Z" level=info msg="TearDown network for sandbox \"042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c\" successfully" Nov 1 04:18:53.476536 env[1306]: time="2025-11-01T04:18:53.471929379Z" level=info msg="StopPodSandbox for \"042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c\" returns successfully" Nov 1 04:18:53.476536 env[1306]: time="2025-11-01T04:18:53.476135578Z" level=info msg="RemovePodSandbox for \"042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c\"" Nov 1 04:18:53.476536 env[1306]: time="2025-11-01T04:18:53.476171197Z" level=info msg="Forcibly stopping sandbox \"042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c\"" Nov 1 04:18:53.583366 env[1306]: 2025-11-01 04:18:53.524 [WARNING][4558] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i9e8z.gb1.brightbox.com-k8s-calico--kube--controllers--5857878c7b--k98b2-eth0", GenerateName:"calico-kube-controllers-5857878c7b-", Namespace:"calico-system", SelfLink:"", UID:"875389eb-2978-4aa4-ad6e-7b619ce206e3", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 4, 18, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5857878c7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i9e8z.gb1.brightbox.com", ContainerID:"c69e7a05886b50d59b88e6a88ebb5b8ca92a7c51804d95b1bde7a4e665f3bd19", Pod:"calico-kube-controllers-5857878c7b-k98b2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2449e6566b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 04:18:53.583366 env[1306]: 2025-11-01 04:18:53.524 [INFO][4558] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" Nov 1 04:18:53.583366 env[1306]: 2025-11-01 04:18:53.524 [INFO][4558] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" iface="eth0" netns="" Nov 1 04:18:53.583366 env[1306]: 2025-11-01 04:18:53.524 [INFO][4558] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" Nov 1 04:18:53.583366 env[1306]: 2025-11-01 04:18:53.524 [INFO][4558] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" Nov 1 04:18:53.583366 env[1306]: 2025-11-01 04:18:53.571 [INFO][4566] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" HandleID="k8s-pod-network.042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--kube--controllers--5857878c7b--k98b2-eth0" Nov 1 04:18:53.583366 env[1306]: 2025-11-01 04:18:53.572 [INFO][4566] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 04:18:53.583366 env[1306]: 2025-11-01 04:18:53.572 [INFO][4566] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 04:18:53.583366 env[1306]: 2025-11-01 04:18:53.578 [WARNING][4566] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" HandleID="k8s-pod-network.042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--kube--controllers--5857878c7b--k98b2-eth0" Nov 1 04:18:53.583366 env[1306]: 2025-11-01 04:18:53.578 [INFO][4566] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" HandleID="k8s-pod-network.042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--kube--controllers--5857878c7b--k98b2-eth0" Nov 1 04:18:53.583366 env[1306]: 2025-11-01 04:18:53.579 [INFO][4566] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 04:18:53.583366 env[1306]: 2025-11-01 04:18:53.581 [INFO][4558] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c" Nov 1 04:18:53.584081 env[1306]: time="2025-11-01T04:18:53.584041245Z" level=info msg="TearDown network for sandbox \"042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c\" successfully" Nov 1 04:18:53.587217 env[1306]: time="2025-11-01T04:18:53.587136181Z" level=info msg="RemovePodSandbox \"042fce9e6402cca9278e2d64895b287572deb7814bf5e4a11d1e8b1c149cc53c\" returns successfully" Nov 1 04:18:53.588401 env[1306]: time="2025-11-01T04:18:53.588307471Z" level=info msg="StopPodSandbox for \"c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a\"" Nov 1 04:18:53.763849 env[1306]: 2025-11-01 04:18:53.657 [WARNING][4581] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--64jld-eth0", GenerateName:"calico-apiserver-6dbfc7dd7f-", Namespace:"calico-apiserver", SelfLink:"", UID:"cae162c8-7a97-49e7-94b4-82cbdda1df19", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 4, 18, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dbfc7dd7f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i9e8z.gb1.brightbox.com", ContainerID:"fbe45d95506d309213883d00bec176398d7518228a656f9ed57ed693e6ebe705", Pod:"calico-apiserver-6dbfc7dd7f-64jld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid07a9cbe3af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 04:18:53.763849 env[1306]: 2025-11-01 04:18:53.657 [INFO][4581] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" Nov 1 04:18:53.763849 env[1306]: 2025-11-01 04:18:53.657 [INFO][4581] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" iface="eth0" netns="" Nov 1 04:18:53.763849 env[1306]: 2025-11-01 04:18:53.657 [INFO][4581] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" Nov 1 04:18:53.763849 env[1306]: 2025-11-01 04:18:53.657 [INFO][4581] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" Nov 1 04:18:53.763849 env[1306]: 2025-11-01 04:18:53.738 [INFO][4588] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" HandleID="k8s-pod-network.c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--64jld-eth0" Nov 1 04:18:53.763849 env[1306]: 2025-11-01 04:18:53.743 [INFO][4588] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 04:18:53.763849 env[1306]: 2025-11-01 04:18:53.743 [INFO][4588] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 04:18:53.763849 env[1306]: 2025-11-01 04:18:53.757 [WARNING][4588] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" HandleID="k8s-pod-network.c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--64jld-eth0" Nov 1 04:18:53.763849 env[1306]: 2025-11-01 04:18:53.757 [INFO][4588] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" HandleID="k8s-pod-network.c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--64jld-eth0" Nov 1 04:18:53.763849 env[1306]: 2025-11-01 04:18:53.758 [INFO][4588] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 04:18:53.763849 env[1306]: 2025-11-01 04:18:53.760 [INFO][4581] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" Nov 1 04:18:53.763849 env[1306]: time="2025-11-01T04:18:53.762378149Z" level=info msg="TearDown network for sandbox \"c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a\" successfully" Nov 1 04:18:53.763849 env[1306]: time="2025-11-01T04:18:53.762413817Z" level=info msg="StopPodSandbox for \"c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a\" returns successfully" Nov 1 04:18:53.765039 env[1306]: time="2025-11-01T04:18:53.764186226Z" level=info msg="RemovePodSandbox for \"c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a\"" Nov 1 04:18:53.765039 env[1306]: time="2025-11-01T04:18:53.764228732Z" level=info msg="Forcibly stopping sandbox \"c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a\"" Nov 1 04:18:53.847632 env[1306]: time="2025-11-01T04:18:53.847560421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 04:18:53.904717 env[1306]: 2025-11-01 04:18:53.827 [WARNING][4602] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--64jld-eth0", GenerateName:"calico-apiserver-6dbfc7dd7f-", Namespace:"calico-apiserver", SelfLink:"", UID:"cae162c8-7a97-49e7-94b4-82cbdda1df19", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 4, 18, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dbfc7dd7f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i9e8z.gb1.brightbox.com", ContainerID:"fbe45d95506d309213883d00bec176398d7518228a656f9ed57ed693e6ebe705", Pod:"calico-apiserver-6dbfc7dd7f-64jld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid07a9cbe3af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 04:18:53.904717 env[1306]: 2025-11-01 04:18:53.827 [INFO][4602] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" Nov 1 04:18:53.904717 env[1306]: 2025-11-01 04:18:53.827 [INFO][4602] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" iface="eth0" netns="" Nov 1 04:18:53.904717 env[1306]: 2025-11-01 04:18:53.827 [INFO][4602] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" Nov 1 04:18:53.904717 env[1306]: 2025-11-01 04:18:53.827 [INFO][4602] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" Nov 1 04:18:53.904717 env[1306]: 2025-11-01 04:18:53.877 [INFO][4609] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" HandleID="k8s-pod-network.c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--64jld-eth0" Nov 1 04:18:53.904717 env[1306]: 2025-11-01 04:18:53.878 [INFO][4609] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 04:18:53.904717 env[1306]: 2025-11-01 04:18:53.878 [INFO][4609] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 04:18:53.904717 env[1306]: 2025-11-01 04:18:53.885 [WARNING][4609] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" HandleID="k8s-pod-network.c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--64jld-eth0" Nov 1 04:18:53.904717 env[1306]: 2025-11-01 04:18:53.885 [INFO][4609] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" HandleID="k8s-pod-network.c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--64jld-eth0" Nov 1 04:18:53.904717 env[1306]: 2025-11-01 04:18:53.887 [INFO][4609] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 04:18:53.904717 env[1306]: 2025-11-01 04:18:53.898 [INFO][4602] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a" Nov 1 04:18:53.905311 env[1306]: time="2025-11-01T04:18:53.904747121Z" level=info msg="TearDown network for sandbox \"c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a\" successfully" Nov 1 04:18:53.908216 env[1306]: time="2025-11-01T04:18:53.908177399Z" level=info msg="RemovePodSandbox \"c464620f2fd535ded31e7b8741d8a3e41948743721f89703832b393ee45eba5a\" returns successfully" Nov 1 04:18:53.908787 env[1306]: time="2025-11-01T04:18:53.908762673Z" level=info msg="StopPodSandbox for \"eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb\"" Nov 1 04:18:54.041161 env[1306]: 2025-11-01 04:18:53.984 [WARNING][4623] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cvw4c-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f6255d76-4d91-405c-b114-1f4e921c4b8b", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 4, 17, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i9e8z.gb1.brightbox.com", ContainerID:"fa181867f33a76460842bd82d39107ad76642a377fba38cfa58509911860cf60", Pod:"coredns-668d6bf9bc-cvw4c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86caf317204", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 04:18:54.041161 env[1306]: 2025-11-01 04:18:53.984 [INFO][4623] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" Nov 1 04:18:54.041161 env[1306]: 2025-11-01 04:18:53.984 [INFO][4623] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" iface="eth0" netns="" Nov 1 04:18:54.041161 env[1306]: 2025-11-01 04:18:53.984 [INFO][4623] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" Nov 1 04:18:54.041161 env[1306]: 2025-11-01 04:18:53.984 [INFO][4623] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" Nov 1 04:18:54.041161 env[1306]: 2025-11-01 04:18:54.023 [INFO][4630] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" HandleID="k8s-pod-network.eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" Workload="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cvw4c-eth0" Nov 1 04:18:54.041161 env[1306]: 2025-11-01 04:18:54.023 [INFO][4630] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 04:18:54.041161 env[1306]: 2025-11-01 04:18:54.024 [INFO][4630] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 04:18:54.041161 env[1306]: 2025-11-01 04:18:54.034 [WARNING][4630] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" HandleID="k8s-pod-network.eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" Workload="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cvw4c-eth0" Nov 1 04:18:54.041161 env[1306]: 2025-11-01 04:18:54.034 [INFO][4630] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" HandleID="k8s-pod-network.eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" Workload="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cvw4c-eth0" Nov 1 04:18:54.041161 env[1306]: 2025-11-01 04:18:54.037 [INFO][4630] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 04:18:54.041161 env[1306]: 2025-11-01 04:18:54.039 [INFO][4623] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" Nov 1 04:18:54.041161 env[1306]: time="2025-11-01T04:18:54.040976086Z" level=info msg="TearDown network for sandbox \"eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb\" successfully" Nov 1 04:18:54.041161 env[1306]: time="2025-11-01T04:18:54.041008769Z" level=info msg="StopPodSandbox for \"eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb\" returns successfully" Nov 1 04:18:54.046361 env[1306]: time="2025-11-01T04:18:54.043227732Z" level=info msg="RemovePodSandbox for \"eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb\"" Nov 1 04:18:54.046361 env[1306]: time="2025-11-01T04:18:54.043279919Z" level=info msg="Forcibly stopping sandbox \"eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb\"" Nov 1 04:18:54.166500 env[1306]: time="2025-11-01T04:18:54.166443791Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 04:18:54.188903 env[1306]: time="2025-11-01T04:18:54.188838938Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 04:18:54.190233 kubelet[2178]: E1101 04:18:54.189663 2178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 04:18:54.190233 kubelet[2178]: E1101 04:18:54.189723 2178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 04:18:54.190233 kubelet[2178]: E1101 04:18:54.189851 2178 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b3e36ba512a94ba9aab826249dfe86b5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gzqrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-585f46d5f6-8lv48_calico-system(10fb6fc1-31fc-4d32-a2e8-032e174f09df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 04:18:54.197237 env[1306]: time="2025-11-01T04:18:54.197203334Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 04:18:54.214940 env[1306]: 2025-11-01 04:18:54.116 [WARNING][4644] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cvw4c-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f6255d76-4d91-405c-b114-1f4e921c4b8b", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 4, 17, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i9e8z.gb1.brightbox.com", ContainerID:"fa181867f33a76460842bd82d39107ad76642a377fba38cfa58509911860cf60", Pod:"coredns-668d6bf9bc-cvw4c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86caf317204", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 04:18:54.214940 env[1306]: 2025-11-01 04:18:54.116 [INFO][4644] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" Nov 1 04:18:54.214940 env[1306]: 2025-11-01 04:18:54.116 [INFO][4644] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" iface="eth0" netns="" Nov 1 04:18:54.214940 env[1306]: 2025-11-01 04:18:54.116 [INFO][4644] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" Nov 1 04:18:54.214940 env[1306]: 2025-11-01 04:18:54.116 [INFO][4644] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" Nov 1 04:18:54.214940 env[1306]: 2025-11-01 04:18:54.176 [INFO][4651] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" HandleID="k8s-pod-network.eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" Workload="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cvw4c-eth0" Nov 1 04:18:54.214940 env[1306]: 2025-11-01 04:18:54.176 [INFO][4651] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 04:18:54.214940 env[1306]: 2025-11-01 04:18:54.177 [INFO][4651] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 04:18:54.214940 env[1306]: 2025-11-01 04:18:54.198 [WARNING][4651] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" HandleID="k8s-pod-network.eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" Workload="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cvw4c-eth0" Nov 1 04:18:54.214940 env[1306]: 2025-11-01 04:18:54.198 [INFO][4651] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" HandleID="k8s-pod-network.eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" Workload="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cvw4c-eth0" Nov 1 04:18:54.214940 env[1306]: 2025-11-01 04:18:54.201 [INFO][4651] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 04:18:54.214940 env[1306]: 2025-11-01 04:18:54.206 [INFO][4644] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb" Nov 1 04:18:54.215997 env[1306]: time="2025-11-01T04:18:54.215419566Z" level=info msg="TearDown network for sandbox \"eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb\" successfully" Nov 1 04:18:54.219689 env[1306]: time="2025-11-01T04:18:54.219659801Z" level=info msg="RemovePodSandbox \"eec1fbc5a26d0c3d0e4138d7690abf88b58d0408fc72ce3d4b6668790f45eceb\" returns successfully" Nov 1 04:18:54.220431 env[1306]: time="2025-11-01T04:18:54.220405011Z" level=info msg="StopPodSandbox for \"acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a\"" Nov 1 04:18:54.406578 env[1306]: 2025-11-01 04:18:54.334 [WARNING][4665] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--w5zxm-eth0", GenerateName:"calico-apiserver-6dbfc7dd7f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e7537f60-17f7-4f3b-b511-49610ae00add", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 4, 18, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dbfc7dd7f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i9e8z.gb1.brightbox.com", ContainerID:"db92ab62bc2f78917cadc51394b87f50764c2105a81f66420adbdb84162cf690", Pod:"calico-apiserver-6dbfc7dd7f-w5zxm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4dab77b6fc6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 04:18:54.406578 env[1306]: 2025-11-01 04:18:54.335 [INFO][4665] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" Nov 1 04:18:54.406578 env[1306]: 2025-11-01 04:18:54.335 [INFO][4665] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" iface="eth0" netns="" Nov 1 04:18:54.406578 env[1306]: 2025-11-01 04:18:54.335 [INFO][4665] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" Nov 1 04:18:54.406578 env[1306]: 2025-11-01 04:18:54.335 [INFO][4665] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" Nov 1 04:18:54.406578 env[1306]: 2025-11-01 04:18:54.388 [INFO][4672] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" HandleID="k8s-pod-network.acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--w5zxm-eth0" Nov 1 04:18:54.406578 env[1306]: 2025-11-01 04:18:54.388 [INFO][4672] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 04:18:54.406578 env[1306]: 2025-11-01 04:18:54.388 [INFO][4672] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 04:18:54.406578 env[1306]: 2025-11-01 04:18:54.398 [WARNING][4672] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" HandleID="k8s-pod-network.acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--w5zxm-eth0" Nov 1 04:18:54.406578 env[1306]: 2025-11-01 04:18:54.398 [INFO][4672] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" HandleID="k8s-pod-network.acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--w5zxm-eth0" Nov 1 04:18:54.406578 env[1306]: 2025-11-01 04:18:54.400 [INFO][4672] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 04:18:54.406578 env[1306]: 2025-11-01 04:18:54.403 [INFO][4665] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" Nov 1 04:18:54.408312 env[1306]: time="2025-11-01T04:18:54.407173079Z" level=info msg="TearDown network for sandbox \"acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a\" successfully" Nov 1 04:18:54.408312 env[1306]: time="2025-11-01T04:18:54.407218131Z" level=info msg="StopPodSandbox for \"acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a\" returns successfully" Nov 1 04:18:54.408722 env[1306]: time="2025-11-01T04:18:54.408686784Z" level=info msg="RemovePodSandbox for \"acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a\"" Nov 1 04:18:54.409026 env[1306]: time="2025-11-01T04:18:54.408885166Z" level=info msg="Forcibly stopping sandbox \"acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a\"" Nov 1 04:18:54.510624 env[1306]: time="2025-11-01T04:18:54.510574568Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 04:18:54.511977 env[1306]: time="2025-11-01T04:18:54.511922910Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 04:18:54.526085 kubelet[2178]: E1101 04:18:54.525848 2178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 04:18:54.526085 kubelet[2178]: E1101 04:18:54.525931 2178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 04:18:54.528002 kubelet[2178]: E1101 04:18:54.526203 2178 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gzqrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-585f46d5f6-8lv48_calico-system(10fb6fc1-31fc-4d32-a2e8-032e174f09df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 04:18:54.563896 kubelet[2178]: E1101 04:18:54.563830 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-585f46d5f6-8lv48" podUID="10fb6fc1-31fc-4d32-a2e8-032e174f09df" Nov 1 04:18:54.608570 env[1306]: 2025-11-01 04:18:54.468 [WARNING][4689] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--w5zxm-eth0", GenerateName:"calico-apiserver-6dbfc7dd7f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e7537f60-17f7-4f3b-b511-49610ae00add", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 4, 18, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6dbfc7dd7f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i9e8z.gb1.brightbox.com", ContainerID:"db92ab62bc2f78917cadc51394b87f50764c2105a81f66420adbdb84162cf690", Pod:"calico-apiserver-6dbfc7dd7f-w5zxm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4dab77b6fc6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 04:18:54.608570 env[1306]: 2025-11-01 04:18:54.469 [INFO][4689] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" Nov 1 04:18:54.608570 env[1306]: 2025-11-01 04:18:54.469 [INFO][4689] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" iface="eth0" netns="" Nov 1 04:18:54.608570 env[1306]: 2025-11-01 04:18:54.469 [INFO][4689] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" Nov 1 04:18:54.608570 env[1306]: 2025-11-01 04:18:54.469 [INFO][4689] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" Nov 1 04:18:54.608570 env[1306]: 2025-11-01 04:18:54.592 [INFO][4696] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" HandleID="k8s-pod-network.acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--w5zxm-eth0" Nov 1 04:18:54.608570 env[1306]: 2025-11-01 04:18:54.592 [INFO][4696] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 04:18:54.608570 env[1306]: 2025-11-01 04:18:54.593 [INFO][4696] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 04:18:54.608570 env[1306]: 2025-11-01 04:18:54.599 [WARNING][4696] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" HandleID="k8s-pod-network.acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--w5zxm-eth0" Nov 1 04:18:54.608570 env[1306]: 2025-11-01 04:18:54.599 [INFO][4696] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" HandleID="k8s-pod-network.acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" Workload="srv--i9e8z.gb1.brightbox.com-k8s-calico--apiserver--6dbfc7dd7f--w5zxm-eth0" Nov 1 04:18:54.608570 env[1306]: 2025-11-01 04:18:54.601 [INFO][4696] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 04:18:54.608570 env[1306]: 2025-11-01 04:18:54.604 [INFO][4689] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a" Nov 1 04:18:54.609941 env[1306]: time="2025-11-01T04:18:54.609890419Z" level=info msg="TearDown network for sandbox \"acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a\" successfully" Nov 1 04:18:54.615197 env[1306]: time="2025-11-01T04:18:54.614931345Z" level=info msg="RemovePodSandbox \"acc138ad6c01ed2ee7c5121a475290de097b54f0888c3700e0b251a3e795332a\" returns successfully" Nov 1 04:18:54.617810 env[1306]: time="2025-11-01T04:18:54.617742155Z" level=info msg="StopPodSandbox for \"5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe\"" Nov 1 04:18:54.785558 env[1306]: 2025-11-01 04:18:54.713 [WARNING][4713] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jpfzq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5a34f1cb-7b62-4a76-8ec0-fae704ad3a6a", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 4, 17, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i9e8z.gb1.brightbox.com", ContainerID:"8fc073e948e265c198fd2f4b046f805c3e1d471996a7e1b5ab5460ee7775d504", Pod:"coredns-668d6bf9bc-jpfzq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6a311739f5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 04:18:54.785558 env[1306]: 2025-11-01 04:18:54.713 [INFO][4713] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" Nov 1 04:18:54.785558 env[1306]: 2025-11-01 04:18:54.713 [INFO][4713] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" iface="eth0" netns="" Nov 1 04:18:54.785558 env[1306]: 2025-11-01 04:18:54.713 [INFO][4713] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" Nov 1 04:18:54.785558 env[1306]: 2025-11-01 04:18:54.714 [INFO][4713] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" Nov 1 04:18:54.785558 env[1306]: 2025-11-01 04:18:54.751 [INFO][4720] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" HandleID="k8s-pod-network.5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" Workload="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jpfzq-eth0" Nov 1 04:18:54.785558 env[1306]: 2025-11-01 04:18:54.751 [INFO][4720] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 04:18:54.785558 env[1306]: 2025-11-01 04:18:54.751 [INFO][4720] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 04:18:54.785558 env[1306]: 2025-11-01 04:18:54.760 [WARNING][4720] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" HandleID="k8s-pod-network.5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" Workload="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jpfzq-eth0" Nov 1 04:18:54.785558 env[1306]: 2025-11-01 04:18:54.760 [INFO][4720] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" HandleID="k8s-pod-network.5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" Workload="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jpfzq-eth0" Nov 1 04:18:54.785558 env[1306]: 2025-11-01 04:18:54.763 [INFO][4720] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 04:18:54.785558 env[1306]: 2025-11-01 04:18:54.783 [INFO][4713] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" Nov 1 04:18:54.786823 env[1306]: time="2025-11-01T04:18:54.785923165Z" level=info msg="TearDown network for sandbox \"5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe\" successfully" Nov 1 04:18:54.786823 env[1306]: time="2025-11-01T04:18:54.785966622Z" level=info msg="StopPodSandbox for \"5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe\" returns successfully" Nov 1 04:18:54.787153 env[1306]: time="2025-11-01T04:18:54.787058450Z" level=info msg="RemovePodSandbox for \"5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe\"" Nov 1 04:18:54.787254 env[1306]: time="2025-11-01T04:18:54.787088369Z" level=info msg="Forcibly stopping sandbox \"5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe\"" Nov 1 04:18:54.853936 env[1306]: time="2025-11-01T04:18:54.853888260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 04:18:55.070461 env[1306]: 2025-11-01 04:18:54.943 [WARNING][4735] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jpfzq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5a34f1cb-7b62-4a76-8ec0-fae704ad3a6a", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 4, 17, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i9e8z.gb1.brightbox.com", ContainerID:"8fc073e948e265c198fd2f4b046f805c3e1d471996a7e1b5ab5460ee7775d504", Pod:"coredns-668d6bf9bc-jpfzq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6a311739f5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 04:18:55.070461 env[1306]: 2025-11-01 04:18:54.943 [INFO][4735] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" Nov 1 04:18:55.070461 env[1306]: 2025-11-01 04:18:54.943 [INFO][4735] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" iface="eth0" netns="" Nov 1 04:18:55.070461 env[1306]: 2025-11-01 04:18:54.943 [INFO][4735] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" Nov 1 04:18:55.070461 env[1306]: 2025-11-01 04:18:54.943 [INFO][4735] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" Nov 1 04:18:55.070461 env[1306]: 2025-11-01 04:18:55.045 [INFO][4742] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" HandleID="k8s-pod-network.5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" Workload="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jpfzq-eth0" Nov 1 04:18:55.070461 env[1306]: 2025-11-01 04:18:55.046 [INFO][4742] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 04:18:55.070461 env[1306]: 2025-11-01 04:18:55.046 [INFO][4742] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 04:18:55.070461 env[1306]: 2025-11-01 04:18:55.054 [WARNING][4742] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" HandleID="k8s-pod-network.5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" Workload="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jpfzq-eth0" Nov 1 04:18:55.070461 env[1306]: 2025-11-01 04:18:55.054 [INFO][4742] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" HandleID="k8s-pod-network.5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" Workload="srv--i9e8z.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jpfzq-eth0" Nov 1 04:18:55.070461 env[1306]: 2025-11-01 04:18:55.066 [INFO][4742] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 04:18:55.070461 env[1306]: 2025-11-01 04:18:55.068 [INFO][4735] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe" Nov 1 04:18:55.071523 env[1306]: time="2025-11-01T04:18:55.071191262Z" level=info msg="TearDown network for sandbox \"5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe\" successfully" Nov 1 04:18:55.077629 env[1306]: time="2025-11-01T04:18:55.077591506Z" level=info msg="RemovePodSandbox \"5868701a197ce21b06570caa278af444967a6e6005279573a51dfb29831579fe\" returns successfully" Nov 1 04:18:55.079257 env[1306]: time="2025-11-01T04:18:55.079231710Z" level=info msg="StopPodSandbox for \"8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03\"" Nov 1 04:18:55.215198 env[1306]: 2025-11-01 04:18:55.137 [WARNING][4756] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i9e8z.gb1.brightbox.com-k8s-goldmane--666569f655--7nddq-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"09a6446d-f1c6-40ae-8ffc-711e84b66ed9", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 4, 18, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i9e8z.gb1.brightbox.com", ContainerID:"88c5145414aedcfce7976a08c6d5299bb066d4c5c16b1bd92d50906d585a40ed", Pod:"goldmane-666569f655-7nddq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.47.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7c5fb42d33a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 04:18:55.215198 env[1306]: 2025-11-01 04:18:55.137 [INFO][4756] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" Nov 1 04:18:55.215198 env[1306]: 2025-11-01 04:18:55.137 [INFO][4756] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" iface="eth0" netns="" Nov 1 04:18:55.215198 env[1306]: 2025-11-01 04:18:55.137 [INFO][4756] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" Nov 1 04:18:55.215198 env[1306]: 2025-11-01 04:18:55.137 [INFO][4756] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" Nov 1 04:18:55.215198 env[1306]: 2025-11-01 04:18:55.192 [INFO][4763] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" HandleID="k8s-pod-network.8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" Workload="srv--i9e8z.gb1.brightbox.com-k8s-goldmane--666569f655--7nddq-eth0" Nov 1 04:18:55.215198 env[1306]: 2025-11-01 04:18:55.192 [INFO][4763] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 04:18:55.215198 env[1306]: 2025-11-01 04:18:55.192 [INFO][4763] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 04:18:55.215198 env[1306]: 2025-11-01 04:18:55.206 [WARNING][4763] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" HandleID="k8s-pod-network.8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" Workload="srv--i9e8z.gb1.brightbox.com-k8s-goldmane--666569f655--7nddq-eth0" Nov 1 04:18:55.215198 env[1306]: 2025-11-01 04:18:55.206 [INFO][4763] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" HandleID="k8s-pod-network.8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" Workload="srv--i9e8z.gb1.brightbox.com-k8s-goldmane--666569f655--7nddq-eth0" Nov 1 04:18:55.215198 env[1306]: 2025-11-01 04:18:55.210 [INFO][4763] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 04:18:55.215198 env[1306]: 2025-11-01 04:18:55.213 [INFO][4756] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" Nov 1 04:18:55.215954 env[1306]: time="2025-11-01T04:18:55.215917916Z" level=info msg="TearDown network for sandbox \"8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03\" successfully" Nov 1 04:18:55.216363 env[1306]: time="2025-11-01T04:18:55.216024250Z" level=info msg="StopPodSandbox for \"8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03\" returns successfully" Nov 1 04:18:55.216795 env[1306]: time="2025-11-01T04:18:55.216771955Z" level=info msg="RemovePodSandbox for \"8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03\"" Nov 1 04:18:55.216927 env[1306]: time="2025-11-01T04:18:55.216887941Z" level=info msg="Forcibly stopping sandbox \"8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03\"" Nov 1 04:18:55.252828 env[1306]: time="2025-11-01T04:18:55.252772939Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 04:18:55.255209 env[1306]: time="2025-11-01T04:18:55.255155219Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 04:18:55.256537 kubelet[2178]: E1101 04:18:55.255721 2178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 04:18:55.256537 kubelet[2178]: E1101 04:18:55.255786 2178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 04:18:55.256537 kubelet[2178]: E1101 04:18:55.255952 2178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prpcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6dbfc7dd7f-64jld_calico-apiserver(cae162c8-7a97-49e7-94b4-82cbdda1df19): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 04:18:55.257653 kubelet[2178]: E1101 04:18:55.257591 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dbfc7dd7f-64jld" podUID="cae162c8-7a97-49e7-94b4-82cbdda1df19" Nov 1 04:18:55.336196 env[1306]: 2025-11-01 04:18:55.282 [WARNING][4777] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i9e8z.gb1.brightbox.com-k8s-goldmane--666569f655--7nddq-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"09a6446d-f1c6-40ae-8ffc-711e84b66ed9", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 4, 18, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i9e8z.gb1.brightbox.com", ContainerID:"88c5145414aedcfce7976a08c6d5299bb066d4c5c16b1bd92d50906d585a40ed", Pod:"goldmane-666569f655-7nddq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.47.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7c5fb42d33a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 04:18:55.336196 env[1306]: 2025-11-01 04:18:55.282 [INFO][4777] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" Nov 1 04:18:55.336196 env[1306]: 2025-11-01 04:18:55.282 [INFO][4777] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" iface="eth0" netns="" Nov 1 04:18:55.336196 env[1306]: 2025-11-01 04:18:55.282 [INFO][4777] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" Nov 1 04:18:55.336196 env[1306]: 2025-11-01 04:18:55.282 [INFO][4777] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" Nov 1 04:18:55.336196 env[1306]: 2025-11-01 04:18:55.315 [INFO][4784] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" HandleID="k8s-pod-network.8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" Workload="srv--i9e8z.gb1.brightbox.com-k8s-goldmane--666569f655--7nddq-eth0" Nov 1 04:18:55.336196 env[1306]: 2025-11-01 04:18:55.315 [INFO][4784] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 04:18:55.336196 env[1306]: 2025-11-01 04:18:55.315 [INFO][4784] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 04:18:55.336196 env[1306]: 2025-11-01 04:18:55.324 [WARNING][4784] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" HandleID="k8s-pod-network.8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" Workload="srv--i9e8z.gb1.brightbox.com-k8s-goldmane--666569f655--7nddq-eth0" Nov 1 04:18:55.336196 env[1306]: 2025-11-01 04:18:55.324 [INFO][4784] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" HandleID="k8s-pod-network.8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" Workload="srv--i9e8z.gb1.brightbox.com-k8s-goldmane--666569f655--7nddq-eth0" Nov 1 04:18:55.336196 env[1306]: 2025-11-01 04:18:55.327 [INFO][4784] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 04:18:55.336196 env[1306]: 2025-11-01 04:18:55.332 [INFO][4777] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03" Nov 1 04:18:55.337074 env[1306]: time="2025-11-01T04:18:55.336779699Z" level=info msg="TearDown network for sandbox \"8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03\" successfully" Nov 1 04:18:55.340397 env[1306]: time="2025-11-01T04:18:55.340362255Z" level=info msg="RemovePodSandbox \"8aa83c62f1e07ca3b8f1d28a77466d56ec8033a6f3d2d38ab12e1c3025754e03\" returns successfully" Nov 1 04:18:55.342037 env[1306]: time="2025-11-01T04:18:55.341985745Z" level=info msg="StopPodSandbox for \"7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6\"" Nov 1 04:18:55.496567 env[1306]: 2025-11-01 04:18:55.408 [WARNING][4800] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-whisker--5869b76698--z4ssd-eth0" Nov 1 04:18:55.496567 env[1306]: 2025-11-01 04:18:55.408 [INFO][4800] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" Nov 1 04:18:55.496567 env[1306]: 2025-11-01 04:18:55.408 [INFO][4800] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" iface="eth0" netns="" Nov 1 04:18:55.496567 env[1306]: 2025-11-01 04:18:55.408 [INFO][4800] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" Nov 1 04:18:55.496567 env[1306]: 2025-11-01 04:18:55.408 [INFO][4800] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" Nov 1 04:18:55.496567 env[1306]: 2025-11-01 04:18:55.468 [INFO][4807] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" HandleID="k8s-pod-network.7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" Workload="srv--i9e8z.gb1.brightbox.com-k8s-whisker--5869b76698--z4ssd-eth0" Nov 1 04:18:55.496567 env[1306]: 2025-11-01 04:18:55.469 [INFO][4807] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 04:18:55.496567 env[1306]: 2025-11-01 04:18:55.469 [INFO][4807] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 04:18:55.496567 env[1306]: 2025-11-01 04:18:55.476 [WARNING][4807] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" HandleID="k8s-pod-network.7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" Workload="srv--i9e8z.gb1.brightbox.com-k8s-whisker--5869b76698--z4ssd-eth0" Nov 1 04:18:55.496567 env[1306]: 2025-11-01 04:18:55.476 [INFO][4807] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" HandleID="k8s-pod-network.7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" Workload="srv--i9e8z.gb1.brightbox.com-k8s-whisker--5869b76698--z4ssd-eth0" Nov 1 04:18:55.496567 env[1306]: 2025-11-01 04:18:55.483 [INFO][4807] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 04:18:55.496567 env[1306]: 2025-11-01 04:18:55.488 [INFO][4800] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" Nov 1 04:18:55.498483 env[1306]: time="2025-11-01T04:18:55.496599624Z" level=info msg="TearDown network for sandbox \"7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6\" successfully" Nov 1 04:18:55.498483 env[1306]: time="2025-11-01T04:18:55.496632913Z" level=info msg="StopPodSandbox for \"7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6\" returns successfully" Nov 1 04:18:55.499267 env[1306]: time="2025-11-01T04:18:55.498930118Z" level=info msg="RemovePodSandbox for \"7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6\"" Nov 1 04:18:55.499267 env[1306]: time="2025-11-01T04:18:55.498972740Z" level=info msg="Forcibly stopping sandbox \"7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6\"" Nov 1 04:18:55.628638 env[1306]: 2025-11-01 04:18:55.577 [WARNING][4823] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" WorkloadEndpoint="srv--i9e8z.gb1.brightbox.com-k8s-whisker--5869b76698--z4ssd-eth0" Nov 1 04:18:55.628638 env[1306]: 2025-11-01 04:18:55.577 [INFO][4823] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" Nov 1 04:18:55.628638 env[1306]: 2025-11-01 04:18:55.577 [INFO][4823] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" iface="eth0" netns="" Nov 1 04:18:55.628638 env[1306]: 2025-11-01 04:18:55.577 [INFO][4823] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" Nov 1 04:18:55.628638 env[1306]: 2025-11-01 04:18:55.577 [INFO][4823] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" Nov 1 04:18:55.628638 env[1306]: 2025-11-01 04:18:55.611 [INFO][4830] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" HandleID="k8s-pod-network.7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" Workload="srv--i9e8z.gb1.brightbox.com-k8s-whisker--5869b76698--z4ssd-eth0" Nov 1 04:18:55.628638 env[1306]: 2025-11-01 04:18:55.611 [INFO][4830] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 04:18:55.628638 env[1306]: 2025-11-01 04:18:55.611 [INFO][4830] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 04:18:55.628638 env[1306]: 2025-11-01 04:18:55.620 [WARNING][4830] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" HandleID="k8s-pod-network.7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" Workload="srv--i9e8z.gb1.brightbox.com-k8s-whisker--5869b76698--z4ssd-eth0" Nov 1 04:18:55.628638 env[1306]: 2025-11-01 04:18:55.620 [INFO][4830] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" HandleID="k8s-pod-network.7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" Workload="srv--i9e8z.gb1.brightbox.com-k8s-whisker--5869b76698--z4ssd-eth0" Nov 1 04:18:55.628638 env[1306]: 2025-11-01 04:18:55.625 [INFO][4830] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 04:18:55.628638 env[1306]: 2025-11-01 04:18:55.626 [INFO][4823] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6" Nov 1 04:18:55.629655 env[1306]: time="2025-11-01T04:18:55.628596978Z" level=info msg="TearDown network for sandbox \"7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6\" successfully" Nov 1 04:18:55.632283 env[1306]: time="2025-11-01T04:18:55.632240600Z" level=info msg="RemovePodSandbox \"7bff7470b1add4f737773b2956d43058a0e40fee60b1f8c3ea64ba206744c2d6\" returns successfully" Nov 1 04:18:55.845903 env[1306]: time="2025-11-01T04:18:55.845848397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 04:18:56.153149 env[1306]: time="2025-11-01T04:18:56.152993553Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 04:18:56.154066 env[1306]: time="2025-11-01T04:18:56.153797486Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 04:18:56.154494 kubelet[2178]: E1101 04:18:56.154427 2178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 04:18:56.154764 kubelet[2178]: E1101 04:18:56.154724 2178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 04:18:56.155246 kubelet[2178]: E1101 04:18:56.155131 2178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hqpx6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6dbfc7dd7f-w5zxm_calico-apiserver(e7537f60-17f7-4f3b-b511-49610ae00add): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 04:18:56.157696 kubelet[2178]: E1101 04:18:56.157637 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dbfc7dd7f-w5zxm" podUID="e7537f60-17f7-4f3b-b511-49610ae00add" Nov 1 04:18:57.838518 env[1306]: time="2025-11-01T04:18:57.838238782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 04:18:58.146001 env[1306]: time="2025-11-01T04:18:58.145594370Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 04:18:58.147252 env[1306]: time="2025-11-01T04:18:58.147016869Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 04:18:58.147938 kubelet[2178]: E1101 
04:18:58.147826 2178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 04:18:58.148952 kubelet[2178]: E1101 04:18:58.148896 2178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 04:18:58.149553 kubelet[2178]: E1101 04:18:58.149421 2178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lxmxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5857878c7b-k98b2_calico-system(875389eb-2978-4aa4-ad6e-7b619ce206e3): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 04:18:58.152017 kubelet[2178]: E1101 04:18:58.151952 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5857878c7b-k98b2" podUID="875389eb-2978-4aa4-ad6e-7b619ce206e3" Nov 1 04:19:03.838130 kubelet[2178]: E1101 04:19:03.837999 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7nddq" podUID="09a6446d-f1c6-40ae-8ffc-711e84b66ed9" Nov 1 04:19:05.839087 kubelet[2178]: E1101 04:19:05.839007 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-585f46d5f6-8lv48" podUID="10fb6fc1-31fc-4d32-a2e8-032e174f09df" Nov 1 04:19:06.839268 kubelet[2178]: E1101 04:19:06.839205 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dbfc7dd7f-w5zxm" podUID="e7537f60-17f7-4f3b-b511-49610ae00add" Nov 1 04:19:06.840882 kubelet[2178]: E1101 04:19:06.840825 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4ccfj" podUID="f7f98be6-9d36-4c44-bedf-cd179c76bbfe" Nov 1 04:19:08.838297 kubelet[2178]: E1101 04:19:08.838252 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dbfc7dd7f-64jld" podUID="cae162c8-7a97-49e7-94b4-82cbdda1df19" Nov 1 04:19:09.839635 kubelet[2178]: E1101 04:19:09.839541 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5857878c7b-k98b2" podUID="875389eb-2978-4aa4-ad6e-7b619ce206e3" Nov 1 04:19:11.431744 systemd[1]: run-containerd-runc-k8s.io-6a993f3df2ffe2d60d4e0201f210657e0876693cb90b43227daec1c05a01b439-runc.36xROq.mount: Deactivated successfully. Nov 1 04:19:15.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.244.102.154:22-139.178.89.65:55294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:19:15.216468 kernel: audit: type=1130 audit(1761970755.194:425): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.244.102.154:22-139.178.89.65:55294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:19:15.195229 systemd[1]: Started sshd@9-10.244.102.154:22-139.178.89.65:55294.service. 
Nov 1 04:19:16.182765 sshd[4868]: Accepted publickey for core from 139.178.89.65 port 55294 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 04:19:16.180000 audit[4868]: USER_ACCT pid=4868 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:16.190847 kernel: audit: type=1101 audit(1761970756.180:426): pid=4868 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:16.191878 sshd[4868]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 04:19:16.189000 audit[4868]: CRED_ACQ pid=4868 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:16.198351 kernel: audit: type=1103 audit(1761970756.189:427): pid=4868 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:16.206726 kernel: audit: type=1006 audit(1761970756.189:428): pid=4868 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Nov 1 04:19:16.208111 kernel: audit: type=1300 audit(1761970756.189:428): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffab38d9b0 a2=3 a3=0 items=0 ppid=1 pid=4868 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:19:16.189000 audit[4868]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffab38d9b0 a2=3 a3=0 items=0 ppid=1 pid=4868 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:19:16.189000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 04:19:16.216490 kernel: audit: type=1327 audit(1761970756.189:428): proctitle=737368643A20636F7265205B707269765D Nov 1 04:19:16.222390 systemd-logind[1298]: New session 10 of user core. Nov 1 04:19:16.223367 systemd[1]: Started session-10.scope. 
Nov 1 04:19:16.227000 audit[4868]: USER_START pid=4868 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:16.235480 kernel: audit: type=1105 audit(1761970756.227:429): pid=4868 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:16.229000 audit[4871]: CRED_ACQ pid=4871 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:16.240402 kernel: audit: type=1103 audit(1761970756.229:430): pid=4871 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:16.922278 env[1306]: time="2025-11-01T04:19:16.921903794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 04:19:17.258148 env[1306]: time="2025-11-01T04:19:17.257886471Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 04:19:17.258770 env[1306]: time="2025-11-01T04:19:17.258702059Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 04:19:17.282201 kubelet[2178]: E1101 04:19:17.282118 2178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 04:19:17.287463 kubelet[2178]: E1101 04:19:17.287364 2178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 04:19:17.287779 kubelet[2178]: E1101 04:19:17.287577 2178 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b3e36ba512a94ba9aab826249dfe86b5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gzqrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-585f46d5f6-8lv48_calico-system(10fb6fc1-31fc-4d32-a2e8-032e174f09df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 04:19:17.295790 env[1306]: time="2025-11-01T04:19:17.293731555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 04:19:17.576114 sshd[4868]: pam_unix(sshd:session): session closed for user core Nov 1 04:19:17.578000 audit[4868]: USER_END pid=4868 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:17.590353 kernel: audit: type=1106 audit(1761970757.578:431): pid=4868 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:17.593056 systemd[1]: sshd@9-10.244.102.154:22-139.178.89.65:55294.service: Deactivated successfully. Nov 1 04:19:17.594071 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 04:19:17.594490 systemd-logind[1298]: Session 10 logged out. Waiting for processes to exit. Nov 1 04:19:17.595407 systemd-logind[1298]: Removed session 10. 
Nov 1 04:19:17.589000 audit[4868]: CRED_DISP pid=4868 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:17.600349 kernel: audit: type=1104 audit(1761970757.589:432): pid=4868 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:17.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.244.102.154:22-139.178.89.65:55294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:19:17.611555 env[1306]: time="2025-11-01T04:19:17.611354737Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 04:19:17.612342 env[1306]: time="2025-11-01T04:19:17.612216007Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 04:19:17.612743 kubelet[2178]: E1101 04:19:17.612689 2178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 04:19:17.612919 kubelet[2178]: E1101 04:19:17.612897 2178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 04:19:17.613285 kubelet[2178]: E1101 04:19:17.613239 2178 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gzqrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-585f46d5f6-8lv48_calico-system(10fb6fc1-31fc-4d32-a2e8-032e174f09df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 04:19:17.618924 kubelet[2178]: E1101 04:19:17.618808 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-585f46d5f6-8lv48" podUID="10fb6fc1-31fc-4d32-a2e8-032e174f09df" Nov 1 04:19:17.840161 env[1306]: time="2025-11-01T04:19:17.839828625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 04:19:18.150936 env[1306]: time="2025-11-01T04:19:18.150821791Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 04:19:18.163385 env[1306]: time="2025-11-01T04:19:18.161552435Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 04:19:18.163565 kubelet[2178]: E1101 04:19:18.162370 2178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 04:19:18.163565 kubelet[2178]: E1101 04:19:18.162551 2178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 04:19:18.163565 kubelet[2178]: E1101 04:19:18.163192 2178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2hz7s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7nddq_calico-system(09a6446d-f1c6-40ae-8ffc-711e84b66ed9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 04:19:18.165489 kubelet[2178]: E1101 04:19:18.165415 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7nddq" podUID="09a6446d-f1c6-40ae-8ffc-711e84b66ed9" Nov 1 04:19:19.846937 env[1306]: time="2025-11-01T04:19:19.846877198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 04:19:20.157092 env[1306]: time="2025-11-01T04:19:20.156997567Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 04:19:20.158094 env[1306]: time="2025-11-01T04:19:20.158032424Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 04:19:20.158644 kubelet[2178]: E1101 04:19:20.158568 2178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 04:19:20.159534 kubelet[2178]: E1101 04:19:20.159490 2178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 04:19:20.159984 kubelet[2178]: E1101 04:19:20.159907 2178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fwflm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4ccfj_calico-system(f7f98be6-9d36-4c44-bedf-cd179c76bbfe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 04:19:20.163879 env[1306]: time="2025-11-01T04:19:20.163835104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 04:19:20.468710 env[1306]: time="2025-11-01T04:19:20.468401322Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 04:19:20.470445 env[1306]: time="2025-11-01T04:19:20.470298552Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 04:19:20.470967 kubelet[2178]: E1101 04:19:20.470862 2178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 04:19:20.471252 kubelet[2178]: E1101 04:19:20.470996 2178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 04:19:20.471486 kubelet[2178]: E1101 04:19:20.471389 2178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fwflm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4ccfj_calico-system(f7f98be6-9d36-4c44-bedf-cd179c76bbfe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 04:19:20.473216 kubelet[2178]: E1101 04:19:20.473112 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4ccfj" podUID="f7f98be6-9d36-4c44-bedf-cd179c76bbfe" Nov 1 04:19:20.844469 env[1306]: time="2025-11-01T04:19:20.842675561Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 04:19:21.519257 env[1306]: time="2025-11-01T04:19:21.518844197Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 04:19:21.520622 env[1306]: time="2025-11-01T04:19:21.520456978Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 04:19:21.520870 kubelet[2178]: E1101 04:19:21.520815 2178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 04:19:21.521200 kubelet[2178]: E1101 04:19:21.520893 2178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 04:19:21.521200 kubelet[2178]: E1101 04:19:21.521107 2178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hqpx6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod calico-apiserver-6dbfc7dd7f-w5zxm_calico-apiserver(e7537f60-17f7-4f3b-b511-49610ae00add): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 04:19:21.522897 kubelet[2178]: E1101 04:19:21.522855 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dbfc7dd7f-w5zxm" podUID="e7537f60-17f7-4f3b-b511-49610ae00add" Nov 1 04:19:22.726615 systemd[1]: Started sshd@10-10.244.102.154:22-139.178.89.65:47140.service. Nov 1 04:19:22.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.244.102.154:22-139.178.89.65:47140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:19:22.734727 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 04:19:22.734934 kernel: audit: type=1130 audit(1761970762.727:434): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.244.102.154:22-139.178.89.65:47140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:19:22.852135 env[1306]: time="2025-11-01T04:19:22.851794058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 04:19:23.162129 env[1306]: time="2025-11-01T04:19:23.162065785Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 04:19:23.163279 env[1306]: time="2025-11-01T04:19:23.163173975Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 04:19:23.163706 kubelet[2178]: E1101 04:19:23.163648 2178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 04:19:23.164123 kubelet[2178]: E1101 04:19:23.163732 2178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 04:19:23.164123 kubelet[2178]: E1101 04:19:23.163929 2178 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lxmxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5857878c7b-k98b2_calico-system(875389eb-2978-4aa4-ad6e-7b619ce206e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 04:19:23.165547 kubelet[2178]: E1101 04:19:23.165506 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5857878c7b-k98b2" podUID="875389eb-2978-4aa4-ad6e-7b619ce206e3" Nov 1 04:19:23.668000 audit[4890]: USER_ACCT pid=4890 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:23.687633 kernel: audit: type=1101 audit(1761970763.668:435): pid=4890 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:23.687858 sshd[4890]: Accepted publickey for core from 139.178.89.65 port 47140 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 04:19:23.687000 audit[4890]: CRED_ACQ pid=4890 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:23.695180 kernel: audit: type=1103 audit(1761970763.687:436): pid=4890 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:23.695292 kernel: audit: type=1006 audit(1761970763.688:437): pid=4890 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Nov 1 04:19:23.695345 kernel: audit: type=1300 audit(1761970763.688:437): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc761eb650 a2=3 a3=0 items=0 ppid=1 pid=4890 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:19:23.688000 audit[4890]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc761eb650 a2=3 a3=0 items=0 ppid=1 pid=4890 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:19:23.695795 sshd[4890]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 04:19:23.688000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 04:19:23.709423 kernel: audit: type=1327 audit(1761970763.688:437): proctitle=737368643A20636F7265205B707269765D Nov 1 04:19:23.712024 systemd-logind[1298]: New session 11 of user core. Nov 1 04:19:23.712894 systemd[1]: Started session-11.scope. 
Nov 1 04:19:23.721000 audit[4890]: USER_START pid=4890 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:23.728002 kernel: audit: type=1105 audit(1761970763.721:438): pid=4890 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:23.726000 audit[4893]: CRED_ACQ pid=4893 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:23.734424 kernel: audit: type=1103 audit(1761970763.726:439): pid=4893 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:23.844392 env[1306]: time="2025-11-01T04:19:23.844026755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 04:19:24.157474 env[1306]: time="2025-11-01T04:19:24.157417856Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 04:19:24.158236 env[1306]: time="2025-11-01T04:19:24.158188152Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 04:19:24.158473 kubelet[2178]: E1101 04:19:24.158431 2178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 04:19:24.158587 kubelet[2178]: E1101 04:19:24.158493 2178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 04:19:24.158699 kubelet[2178]: E1101 04:19:24.158661 2178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prpcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6dbfc7dd7f-64jld_calico-apiserver(cae162c8-7a97-49e7-94b4-82cbdda1df19): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 04:19:24.160117 kubelet[2178]: E1101 04:19:24.160083 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dbfc7dd7f-64jld" podUID="cae162c8-7a97-49e7-94b4-82cbdda1df19" Nov 1 04:19:24.690250 sshd[4890]: pam_unix(sshd:session): session closed for user core Nov 1 04:19:24.691000 audit[4890]: USER_END pid=4890 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:24.700352 kernel: audit: type=1106 audit(1761970764.691:440): pid=4890 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh 
res=success' Nov 1 04:19:24.699000 audit[4890]: CRED_DISP pid=4890 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:24.704349 kernel: audit: type=1104 audit(1761970764.699:441): pid=4890 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:24.704944 systemd[1]: sshd@10-10.244.102.154:22-139.178.89.65:47140.service: Deactivated successfully. Nov 1 04:19:24.706339 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 04:19:24.706742 systemd-logind[1298]: Session 11 logged out. Waiting for processes to exit. Nov 1 04:19:24.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.244.102.154:22-139.178.89.65:47140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:19:24.707738 systemd-logind[1298]: Removed session 11. Nov 1 04:19:28.844693 kubelet[2178]: E1101 04:19:28.844492 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7nddq" podUID="09a6446d-f1c6-40ae-8ffc-711e84b66ed9" Nov 1 04:19:29.837307 systemd[1]: Started sshd@11-10.244.102.154:22-139.178.89.65:52534.service. Nov 1 04:19:29.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.244.102.154:22-139.178.89.65:52534 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:19:29.839555 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 04:19:29.839671 kernel: audit: type=1130 audit(1761970769.836:443): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.244.102.154:22-139.178.89.65:52534 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 04:19:29.844562 kubelet[2178]: E1101 04:19:29.844530 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-585f46d5f6-8lv48" podUID="10fb6fc1-31fc-4d32-a2e8-032e174f09df" Nov 1 04:19:30.761595 sshd[4905]: Accepted publickey for core from 139.178.89.65 port 52534 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 04:19:30.786269 kernel: audit: type=1101 audit(1761970770.758:444): pid=4905 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:30.758000 audit[4905]: USER_ACCT pid=4905 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:30.785819 sshd[4905]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 04:19:30.793489 kernel: audit: type=1103 audit(1761970770.783:445): pid=4905 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:30.783000 audit[4905]: CRED_ACQ pid=4905 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:30.797185 systemd-logind[1298]: New session 12 of user core. Nov 1 04:19:30.804084 kernel: audit: type=1006 audit(1761970770.783:446): pid=4905 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Nov 1 04:19:30.804165 kernel: audit: type=1300 audit(1761970770.783:446): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb8aac0c0 a2=3 a3=0 items=0 ppid=1 pid=4905 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:19:30.783000 audit[4905]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb8aac0c0 a2=3 a3=0 items=0 ppid=1 pid=4905 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:19:30.801491 systemd[1]: Started session-12.scope. 
Nov 1 04:19:30.783000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 04:19:30.806334 kernel: audit: type=1327 audit(1761970770.783:446): proctitle=737368643A20636F7265205B707269765D Nov 1 04:19:30.813000 audit[4905]: USER_START pid=4905 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:30.822420 kernel: audit: type=1105 audit(1761970770.813:447): pid=4905 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:30.822000 audit[4908]: CRED_ACQ pid=4908 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:30.828408 kernel: audit: type=1103 audit(1761970770.822:448): pid=4908 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:31.487426 sshd[4905]: pam_unix(sshd:session): session closed for user core Nov 1 04:19:31.487000 audit[4905]: USER_END pid=4905 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:31.493606 systemd[1]: sshd@11-10.244.102.154:22-139.178.89.65:52534.service: Deactivated successfully. Nov 1 04:19:31.494532 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 04:19:31.497340 kernel: audit: type=1106 audit(1761970771.487:449): pid=4905 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:31.497953 systemd-logind[1298]: Session 12 logged out. Waiting for processes to exit. Nov 1 04:19:31.488000 audit[4905]: CRED_DISP pid=4905 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:31.502524 kernel: audit: type=1104 audit(1761970771.488:450): pid=4905 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:31.502100 systemd-logind[1298]: Removed session 12. Nov 1 04:19:31.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.244.102.154:22-139.178.89.65:52534 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 04:19:31.635949 systemd[1]: Started sshd@12-10.244.102.154:22-139.178.89.65:52542.service. Nov 1 04:19:31.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.244.102.154:22-139.178.89.65:52542 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:19:32.548654 sshd[4919]: Accepted publickey for core from 139.178.89.65 port 52542 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 04:19:32.547000 audit[4919]: USER_ACCT pid=4919 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:32.551411 sshd[4919]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 04:19:32.549000 audit[4919]: CRED_ACQ pid=4919 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:32.549000 audit[4919]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd74b2e090 a2=3 a3=0 items=0 ppid=1 pid=4919 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:19:32.549000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 04:19:32.557797 systemd[1]: Started session-13.scope. Nov 1 04:19:32.558020 systemd-logind[1298]: New session 13 of user core. Nov 1 04:19:32.562000 audit[4919]: USER_START pid=4919 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:32.565000 audit[4922]: CRED_ACQ pid=4922 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:33.413306 sshd[4919]: pam_unix(sshd:session): session closed for user core Nov 1 04:19:33.414000 audit[4919]: USER_END pid=4919 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:33.415000 audit[4919]: CRED_DISP pid=4919 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:33.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.244.102.154:22-139.178.89.65:52542 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:19:33.418644 systemd[1]: sshd@12-10.244.102.154:22-139.178.89.65:52542.service: Deactivated successfully. Nov 1 04:19:33.419991 systemd[1]: session-13.scope: Deactivated successfully. 
Nov 1 04:19:33.420033 systemd-logind[1298]: Session 13 logged out. Waiting for processes to exit. Nov 1 04:19:33.421237 systemd-logind[1298]: Removed session 13. Nov 1 04:19:33.555160 systemd[1]: Started sshd@13-10.244.102.154:22-139.178.89.65:52546.service. Nov 1 04:19:33.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.244.102.154:22-139.178.89.65:52546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:19:34.848641 kubelet[2178]: E1101 04:19:34.848587 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dbfc7dd7f-w5zxm" podUID="e7537f60-17f7-4f3b-b511-49610ae00add" Nov 1 04:19:34.881500 kernel: kauditd_printk_skb: 13 callbacks suppressed Nov 1 04:19:34.882647 kernel: audit: type=1101 audit(1761970774.872:462): pid=4930 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:34.872000 audit[4930]: USER_ACCT pid=4930 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:34.882884 sshd[4930]: Accepted publickey for core from 139.178.89.65 port 52546 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 04:19:34.883000 audit[4930]: CRED_ACQ pid=4930 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:34.895434 kernel: audit: type=1103 audit(1761970774.883:463): pid=4930 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:34.895229 sshd[4930]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 04:19:34.905300 kernel: audit: type=1006 audit(1761970774.883:464): pid=4930 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Nov 1 04:19:34.905398 kernel: audit: type=1300 audit(1761970774.883:464): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffed08c70c0 a2=3 a3=0 items=0 ppid=1 pid=4930 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:19:34.883000 audit[4930]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffed08c70c0 a2=3 a3=0 items=0 ppid=1 pid=4930 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:19:34.883000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 04:19:34.916043 kernel: audit: type=1327 audit(1761970774.883:464): proctitle=737368643A20636F7265205B707269765D Nov 1 04:19:34.916677 systemd-logind[1298]: New session 14 of user core. Nov 1 04:19:34.917300 systemd[1]: Started session-14.scope. Nov 1 04:19:34.931000 audit[4930]: USER_START pid=4930 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:34.938503 kernel: audit: type=1105 audit(1761970774.931:465): pid=4930 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:34.936000 audit[4933]: CRED_ACQ pid=4933 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:34.942388 kernel: audit: type=1103 audit(1761970774.936:466): pid=4933 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:35.625402 sshd[4930]: pam_unix(sshd:session): session closed for user core Nov 1 04:19:35.627000 audit[4930]: USER_END pid=4930 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:35.637538 kernel: audit: type=1106 audit(1761970775.627:467): pid=4930 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:35.636963 systemd-logind[1298]: Session 14 logged out. Waiting for processes to exit. Nov 1 04:19:35.639258 systemd[1]: sshd@13-10.244.102.154:22-139.178.89.65:52546.service: Deactivated successfully. Nov 1 04:19:35.640753 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 04:19:35.642836 systemd-logind[1298]: Removed session 14. 
Nov 1 04:19:35.627000 audit[4930]: CRED_DISP pid=4930 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:35.647336 kernel: audit: type=1104 audit(1761970775.627:468): pid=4930 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:35.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.244.102.154:22-139.178.89.65:52546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:19:35.653360 kernel: audit: type=1131 audit(1761970775.638:469): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.244.102.154:22-139.178.89.65:52546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:19:35.843697 kubelet[2178]: E1101 04:19:35.843584 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5857878c7b-k98b2" podUID="875389eb-2978-4aa4-ad6e-7b619ce206e3" Nov 1 04:19:35.844769 kubelet[2178]: E1101 04:19:35.844717 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4ccfj" podUID="f7f98be6-9d36-4c44-bedf-cd179c76bbfe" Nov 1 04:19:36.838556 kubelet[2178]: E1101 04:19:36.838505 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dbfc7dd7f-64jld" podUID="cae162c8-7a97-49e7-94b4-82cbdda1df19" Nov 1 04:19:40.777000 audit[1]: 
SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.244.102.154:22-139.178.89.65:35126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:19:40.777188 systemd[1]: Started sshd@14-10.244.102.154:22-139.178.89.65:35126.service. Nov 1 04:19:40.792369 kernel: audit: type=1130 audit(1761970780.777:470): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.244.102.154:22-139.178.89.65:35126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:19:41.430700 systemd[1]: run-containerd-runc-k8s.io-6a993f3df2ffe2d60d4e0201f210657e0876693cb90b43227daec1c05a01b439-runc.7tjeSj.mount: Deactivated successfully. Nov 1 04:19:41.717000 audit[4950]: USER_ACCT pid=4950 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:41.727679 sshd[4950]: Accepted publickey for core from 139.178.89.65 port 35126 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 04:19:41.733583 kernel: audit: type=1101 audit(1761970781.717:471): pid=4950 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:41.734000 audit[4950]: CRED_ACQ pid=4950 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:41.741383 kernel: audit: type=1103 audit(1761970781.734:472): pid=4950 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:41.741465 kernel: audit: type=1006 audit(1761970781.734:473): pid=4950 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Nov 1 04:19:41.734000 audit[4950]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe06631130 a2=3 a3=0 items=0 ppid=1 pid=4950 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:19:41.734000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 04:19:41.748656 kernel: audit: type=1300 audit(1761970781.734:473): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe06631130 a2=3 a3=0 items=0 ppid=1 pid=4950 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:19:41.749116 kernel: audit: type=1327 audit(1761970781.734:473): proctitle=737368643A20636F7265205B707269765D Nov 1 04:19:41.748797 sshd[4950]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 04:19:41.763234 systemd-logind[1298]: New session 15 of user core. Nov 1 04:19:41.764388 systemd[1]: Started session-15.scope. 
Nov 1 04:19:41.770000 audit[4950]: USER_START pid=4950 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:41.776459 kernel: audit: type=1105 audit(1761970781.770:474): pid=4950 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:41.777000 audit[4974]: CRED_ACQ pid=4974 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:41.782370 kernel: audit: type=1103 audit(1761970781.777:475): pid=4974 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:41.838383 kubelet[2178]: E1101 04:19:41.838299 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7nddq" podUID="09a6446d-f1c6-40ae-8ffc-711e84b66ed9" Nov 1 04:19:42.574250 sshd[4950]: pam_unix(sshd:session): session closed for user core Nov 1 04:19:42.578000 audit[4950]: USER_END pid=4950 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:42.593366 kernel: audit: type=1106 audit(1761970782.578:476): pid=4950 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:42.595598 systemd[1]: sshd@14-10.244.102.154:22-139.178.89.65:35126.service: Deactivated successfully. Nov 1 04:19:42.598825 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 04:19:42.599738 systemd-logind[1298]: Session 15 logged out. Waiting for processes to exit. Nov 1 04:19:42.602422 systemd-logind[1298]: Removed session 15. 
Nov 1 04:19:42.578000 audit[4950]: CRED_DISP pid=4950 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:42.608860 kernel: audit: type=1104 audit(1761970782.578:477): pid=4950 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:42.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.244.102.154:22-139.178.89.65:35126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:19:43.839420 kubelet[2178]: E1101 04:19:43.839368 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-585f46d5f6-8lv48" podUID="10fb6fc1-31fc-4d32-a2e8-032e174f09df" Nov 1 04:19:47.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.244.102.154:22-139.178.89.65:52092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:19:47.721911 systemd[1]: Started sshd@15-10.244.102.154:22-139.178.89.65:52092.service. Nov 1 04:19:47.725435 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 04:19:47.725548 kernel: audit: type=1130 audit(1761970787.720:479): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.244.102.154:22-139.178.89.65:52092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 04:19:47.838139 kubelet[2178]: E1101 04:19:47.838061 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5857878c7b-k98b2" podUID="875389eb-2978-4aa4-ad6e-7b619ce206e3" Nov 1 04:19:48.634000 audit[4983]: USER_ACCT pid=4983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:48.636153 sshd[4983]: Accepted publickey for core from 139.178.89.65 port 52092 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 04:19:48.640349 kernel: audit: type=1101 audit(1761970788.634:480): pid=4983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:48.640000 audit[4983]: CRED_ACQ pid=4983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:48.642028 sshd[4983]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 04:19:48.645355 kernel: audit: type=1103 audit(1761970788.640:481): pid=4983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:48.652217 systemd[1]: Started session-16.scope. Nov 1 04:19:48.652478 systemd-logind[1298]: New session 16 of user core. 
Nov 1 04:19:48.658342 kernel: audit: type=1006 audit(1761970788.640:482): pid=4983 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Nov 1 04:19:48.640000 audit[4983]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc39f97220 a2=3 a3=0 items=0 ppid=1 pid=4983 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:19:48.666346 kernel: audit: type=1300 audit(1761970788.640:482): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc39f97220 a2=3 a3=0 items=0 ppid=1 pid=4983 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:19:48.640000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 04:19:48.676344 kernel: audit: type=1327 audit(1761970788.640:482): proctitle=737368643A20636F7265205B707269765D Nov 1 04:19:48.661000 audit[4983]: USER_START pid=4983 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:48.682344 kernel: audit: type=1105 audit(1761970788.661:483): pid=4983 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:48.666000 audit[4986]: CRED_ACQ pid=4986 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:48.692345 kernel: audit: type=1103 audit(1761970788.666:484): pid=4986 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:48.839717 kubelet[2178]: E1101 04:19:48.839626 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dbfc7dd7f-w5zxm" podUID="e7537f60-17f7-4f3b-b511-49610ae00add" Nov 1 04:19:49.415228 sshd[4983]: pam_unix(sshd:session): session closed for user core Nov 1 04:19:49.417000 audit[4983]: USER_END pid=4983 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:49.426074 systemd[1]: sshd@15-10.244.102.154:22-139.178.89.65:52092.service: Deactivated successfully. 
Nov 1 04:19:49.426830 kernel: audit: type=1106 audit(1761970789.417:485): pid=4983 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:49.428246 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 04:19:49.428633 systemd-logind[1298]: Session 16 logged out. Waiting for processes to exit. Nov 1 04:19:49.417000 audit[4983]: CRED_DISP pid=4983 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:49.432980 systemd-logind[1298]: Removed session 16. Nov 1 04:19:49.433428 kernel: audit: type=1104 audit(1761970789.417:486): pid=4983 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:49.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.244.102.154:22-139.178.89.65:52092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:19:49.839453 kubelet[2178]: E1101 04:19:49.839313 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4ccfj" podUID="f7f98be6-9d36-4c44-bedf-cd179c76bbfe" Nov 1 04:19:51.839023 kubelet[2178]: E1101 04:19:51.838947 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dbfc7dd7f-64jld" podUID="cae162c8-7a97-49e7-94b4-82cbdda1df19" Nov 1 04:19:53.837886 kubelet[2178]: E1101 04:19:53.837846 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7nddq" podUID="09a6446d-f1c6-40ae-8ffc-711e84b66ed9" Nov 1 04:19:54.564105 systemd[1]: Started sshd@16-10.244.102.154:22-139.178.89.65:52102.service. Nov 1 04:19:54.574466 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 04:19:54.579992 kernel: audit: type=1130 audit(1761970794.563:488): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.244.102.154:22-139.178.89.65:52102 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:19:54.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.244.102.154:22-139.178.89.65:52102 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:19:55.501944 sshd[4998]: Accepted publickey for core from 139.178.89.65 port 52102 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 04:19:55.500000 audit[4998]: USER_ACCT pid=4998 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:55.510407 kernel: audit: type=1101 audit(1761970795.500:489): pid=4998 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:55.511000 audit[4998]: CRED_ACQ pid=4998 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:55.518545 kernel: audit: type=1103 audit(1761970795.511:490): pid=4998 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:55.513154 sshd[4998]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 04:19:55.527507 kernel: audit: type=1006 audit(1761970795.511:491): pid=4998 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Nov 1 04:19:55.527610 kernel: audit: type=1300 audit(1761970795.511:491): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd915831f0 a2=3 a3=0 items=0 ppid=1 pid=4998 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:19:55.511000 audit[4998]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd915831f0 a2=3 a3=0 items=0 ppid=1 pid=4998 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:19:55.532473 systemd[1]: Started session-17.scope. Nov 1 04:19:55.532815 systemd-logind[1298]: New session 17 of user core. 
Nov 1 04:19:55.511000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 04:19:55.538361 kernel: audit: type=1327 audit(1761970795.511:491): proctitle=737368643A20636F7265205B707269765D Nov 1 04:19:55.540000 audit[4998]: USER_START pid=4998 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:55.558357 kernel: audit: type=1105 audit(1761970795.540:492): pid=4998 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:55.542000 audit[5001]: CRED_ACQ pid=5001 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:55.579351 kernel: audit: type=1103 audit(1761970795.542:493): pid=5001 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:56.308794 sshd[4998]: pam_unix(sshd:session): session closed for user core Nov 1 04:19:56.310000 audit[4998]: USER_END pid=4998 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:56.320348 kernel: audit: type=1106 audit(1761970796.310:494): pid=4998 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:56.321998 systemd[1]: sshd@16-10.244.102.154:22-139.178.89.65:52102.service: Deactivated successfully. Nov 1 04:19:56.323185 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 04:19:56.318000 audit[4998]: CRED_DISP pid=4998 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:56.324384 systemd-logind[1298]: Session 17 logged out. Waiting for processes to exit. Nov 1 04:19:56.328521 kernel: audit: type=1104 audit(1761970796.318:495): pid=4998 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:56.329212 systemd-logind[1298]: Removed session 17. Nov 1 04:19:56.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.244.102.154:22-139.178.89.65:52102 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 04:19:56.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.244.102.154:22-139.178.89.65:59254 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:19:56.457116 systemd[1]: Started sshd@17-10.244.102.154:22-139.178.89.65:59254.service. Nov 1 04:19:56.841388 kubelet[2178]: E1101 04:19:56.841344 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-585f46d5f6-8lv48" podUID="10fb6fc1-31fc-4d32-a2e8-032e174f09df" Nov 1 04:19:57.383859 sshd[5011]: Accepted publickey for core from 139.178.89.65 port 59254 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 04:19:57.382000 audit[5011]: USER_ACCT pid=5011 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:57.385000 audit[5011]: CRED_ACQ pid=5011 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:57.385000 audit[5011]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe466ec6c0 a2=3 a3=0 items=0 ppid=1 pid=5011 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:19:57.385000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 04:19:57.388033 sshd[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 04:19:57.395813 systemd-logind[1298]: New session 18 of user core. Nov 1 04:19:57.396771 systemd[1]: Started session-18.scope. 
Nov 1 04:19:57.402000 audit[5011]: USER_START pid=5011 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:57.404000 audit[5016]: CRED_ACQ pid=5016 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:58.448928 sshd[5011]: pam_unix(sshd:session): session closed for user core Nov 1 04:19:58.449000 audit[5011]: USER_END pid=5011 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:58.450000 audit[5011]: CRED_DISP pid=5011 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:58.453775 systemd[1]: sshd@17-10.244.102.154:22-139.178.89.65:59254.service: Deactivated successfully. Nov 1 04:19:58.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.244.102.154:22-139.178.89.65:59254 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:19:58.455055 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 04:19:58.455270 systemd-logind[1298]: Session 18 logged out. Waiting for processes to exit. Nov 1 04:19:58.457537 systemd-logind[1298]: Removed session 18. Nov 1 04:19:58.596534 systemd[1]: Started sshd@18-10.244.102.154:22-139.178.89.65:59270.service. Nov 1 04:19:58.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.244.102.154:22-139.178.89.65:59270 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 04:19:59.511838 sshd[5024]: Accepted publickey for core from 139.178.89.65 port 59270 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 04:19:59.510000 audit[5024]: USER_ACCT pid=5024 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:59.512000 audit[5024]: CRED_ACQ pid=5024 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:59.512000 audit[5024]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffca0b5cf40 a2=3 a3=0 items=0 ppid=1 pid=5024 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:19:59.512000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 04:19:59.514701 sshd[5024]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 04:19:59.519766 systemd-logind[1298]: New session 19 of user core. Nov 1 04:19:59.520300 systemd[1]: Started session-19.scope. Nov 1 04:19:59.530000 audit[5024]: USER_START pid=5024 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:19:59.532000 audit[5027]: CRED_ACQ pid=5027 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:00.839976 kubelet[2178]: E1101 04:20:00.839930 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dbfc7dd7f-w5zxm" podUID="e7537f60-17f7-4f3b-b511-49610ae00add" Nov 1 04:20:01.027000 audit[5044]: NETFILTER_CFG table=filter:118 family=2 entries=14 op=nft_register_rule pid=5044 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:20:01.033856 kernel: kauditd_printk_skb: 20 callbacks suppressed Nov 1 04:20:01.033955 kernel: audit: type=1325 audit(1761970801.027:512): table=filter:118 family=2 entries=14 op=nft_register_rule pid=5044 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:20:01.027000 audit[5044]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdb9848be0 a2=0 a3=7ffdb9848bcc items=0 ppid=2289 pid=5044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:20:01.041344 kernel: audit: type=1300 audit(1761970801.027:512): arch=c000003e syscall=46 success=yes 
exit=5248 a0=3 a1=7ffdb9848be0 a2=0 a3=7ffdb9848bcc items=0 ppid=2289 pid=5044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:20:01.027000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:20:01.044350 kernel: audit: type=1327 audit(1761970801.027:512): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:20:01.043000 audit[5044]: NETFILTER_CFG table=nat:119 family=2 entries=20 op=nft_register_rule pid=5044 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:20:01.047346 kernel: audit: type=1325 audit(1761970801.043:513): table=nat:119 family=2 entries=20 op=nft_register_rule pid=5044 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:20:01.043000 audit[5044]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdb9848be0 a2=0 a3=7ffdb9848bcc items=0 ppid=2289 pid=5044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:20:01.043000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:20:01.054304 kernel: audit: type=1300 audit(1761970801.043:513): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdb9848be0 a2=0 a3=7ffdb9848bcc items=0 ppid=2289 pid=5044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:20:01.054423 kernel: audit: type=1327 audit(1761970801.043:513): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:20:01.060834 sshd[5024]: pam_unix(sshd:session): session closed for user core Nov 1 04:20:01.062000 audit[5024]: USER_END pid=5024 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:01.069356 kernel: audit: type=1106 audit(1761970801.062:514): pid=5024 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:01.070855 systemd[1]: sshd@18-10.244.102.154:22-139.178.89.65:59270.service: Deactivated successfully. Nov 1 04:20:01.067000 audit[5024]: CRED_DISP pid=5024 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:01.072368 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 04:20:01.072383 systemd-logind[1298]: Session 19 logged out. Waiting for processes to exit. 
Nov 1 04:20:01.076353 kernel: audit: type=1104 audit(1761970801.067:515): pid=5024 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:01.077374 systemd-logind[1298]: Removed session 19. Nov 1 04:20:01.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.244.102.154:22-139.178.89.65:59270 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:20:01.082360 kernel: audit: type=1131 audit(1761970801.069:516): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.244.102.154:22-139.178.89.65:59270 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:20:01.088000 audit[5048]: NETFILTER_CFG table=filter:120 family=2 entries=26 op=nft_register_rule pid=5048 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:20:01.093456 kernel: audit: type=1325 audit(1761970801.088:517): table=filter:120 family=2 entries=26 op=nft_register_rule pid=5048 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:20:01.088000 audit[5048]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffcc1f9f370 a2=0 a3=7ffcc1f9f35c items=0 ppid=2289 pid=5048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:20:01.088000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:20:01.095000 audit[5048]: NETFILTER_CFG table=nat:121 family=2 entries=20 op=nft_register_rule pid=5048 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:20:01.095000 audit[5048]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffcc1f9f370 a2=0 a3=0 items=0 ppid=2289 pid=5048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:20:01.095000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:20:01.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.244.102.154:22-139.178.89.65:59284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:20:01.207562 systemd[1]: Started sshd@19-10.244.102.154:22-139.178.89.65:59284.service. 
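The audit records above carry each process's command line as a PROCTITLE field: the argv hex-encoded, with NUL bytes between arguments. A minimal Python decoding sketch, using hex values copied verbatim from the records above:

    # Decode an audit PROCTITLE field: hex-encoded argv, NUL-separated.
    def decode_proctitle(hex_argv: str) -> str:
        return bytes.fromhex(hex_argv).replace(b"\x00", b" ").decode()

    # Values copied from the audit records above.
    print(decode_proctitle("737368643A20636F7265205B707269765D"))
    # -> sshd: core [priv]
    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700"
        "313030303030002D2D6E6F666C757368002D2D636F756E74657273"))
    # -> iptables-restore -w 5 -W 100000 --noflush --counters

So the NETFILTER_CFG/SYSCALL pairs correspond to invocations of "iptables-restore -w 5 -W 100000 --noflush --counters", and the sshd PROCTITLE decodes to "sshd: core [priv]".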
Nov 1 04:20:02.127000 audit[5049]: USER_ACCT pid=5049 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:02.128760 sshd[5049]: Accepted publickey for core from 139.178.89.65 port 59284 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 04:20:02.129000 audit[5049]: CRED_ACQ pid=5049 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:02.130000 audit[5049]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff08c85690 a2=3 a3=0 items=0 ppid=1 pid=5049 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:20:02.130000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 04:20:02.132187 sshd[5049]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 04:20:02.139352 systemd[1]: Started session-20.scope. Nov 1 04:20:02.140918 systemd-logind[1298]: New session 20 of user core. Nov 1 04:20:02.146000 audit[5049]: USER_START pid=5049 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:02.150000 audit[5052]: CRED_ACQ pid=5052 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:02.853404 kubelet[2178]: E1101 04:20:02.853356 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dbfc7dd7f-64jld" podUID="cae162c8-7a97-49e7-94b4-82cbdda1df19" Nov 1 04:20:02.862108 kubelet[2178]: E1101 04:20:02.862002 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5857878c7b-k98b2" podUID="875389eb-2978-4aa4-ad6e-7b619ce206e3" Nov 1 04:20:02.879141 env[1306]: time="2025-11-01T04:20:02.877553102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 04:20:03.185632 env[1306]: time="2025-11-01T04:20:03.185550466Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Nov 1 04:20:03.186668 env[1306]: time="2025-11-01T04:20:03.186587120Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 04:20:03.188502 kubelet[2178]: E1101 04:20:03.187140 2178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 04:20:03.188825 kubelet[2178]: E1101 04:20:03.188501 2178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 04:20:03.189868 kubelet[2178]: E1101 04:20:03.189795 2178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fwflm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4ccfj_calico-system(f7f98be6-9d36-4c44-bedf-cd179c76bbfe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 04:20:03.192583 env[1306]: time="2025-11-01T04:20:03.192546793Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 04:20:03.335201 sshd[5049]: pam_unix(sshd:session): session closed for user core Nov 1 04:20:03.337000 audit[5049]: USER_END pid=5049 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:03.338000 audit[5049]: CRED_DISP pid=5049 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:03.341760 systemd[1]: sshd@19-10.244.102.154:22-139.178.89.65:59284.service: Deactivated successfully. Nov 1 04:20:03.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.244.102.154:22-139.178.89.65:59284 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:20:03.343138 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 04:20:03.344632 systemd-logind[1298]: Session 20 logged out. Waiting for processes to exit. Nov 1 04:20:03.345681 systemd-logind[1298]: Removed session 20. Nov 1 04:20:03.480620 systemd[1]: Started sshd@20-10.244.102.154:22-139.178.89.65:59286.service. Nov 1 04:20:03.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.244.102.154:22-139.178.89.65:59286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:20:03.568211 env[1306]: time="2025-11-01T04:20:03.566943375Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 04:20:03.574378 env[1306]: time="2025-11-01T04:20:03.573098023Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 04:20:03.574539 kubelet[2178]: E1101 04:20:03.573303 2178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 04:20:03.574539 kubelet[2178]: E1101 04:20:03.573370 2178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 04:20:03.574539 kubelet[2178]: E1101 04:20:03.573501 2178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fwflm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4ccfj_calico-system(f7f98be6-9d36-4c44-bedf-cd179c76bbfe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 04:20:03.574754 kubelet[2178]: E1101 04:20:03.574611 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4ccfj" podUID="f7f98be6-9d36-4c44-bedf-cd179c76bbfe" Nov 1 04:20:04.382000 audit[5062]: USER_ACCT pid=5062 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:04.384192 sshd[5062]: Accepted publickey for core from 139.178.89.65 port 59286 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 04:20:04.384000 audit[5062]: CRED_ACQ pid=5062 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:04.384000 audit[5062]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdfc3a90b0 a2=3 a3=0 items=0 ppid=1 pid=5062 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:20:04.384000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 04:20:04.386794 sshd[5062]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 04:20:04.395671 systemd-logind[1298]: New session 21 of user core. Nov 1 04:20:04.396689 systemd[1]: Started session-21.scope. Nov 1 04:20:04.400000 audit[5062]: USER_START pid=5062 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:04.402000 audit[5065]: CRED_ACQ pid=5065 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:05.177017 sshd[5062]: pam_unix(sshd:session): session closed for user core Nov 1 04:20:05.177000 audit[5062]: USER_END pid=5062 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:05.177000 audit[5062]: CRED_DISP pid=5062 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:05.180841 systemd-logind[1298]: Session 21 logged out. Waiting for processes to exit. Nov 1 04:20:05.181360 systemd[1]: sshd@20-10.244.102.154:22-139.178.89.65:59286.service: Deactivated successfully. Nov 1 04:20:05.182750 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 04:20:05.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.244.102.154:22-139.178.89.65:59286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:20:05.183508 systemd-logind[1298]: Removed session 21. 
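Each ErrImagePull entry above is containerd reporting that ghcr.io returned 404 for the requested tag (the "trying next host - response was http.StatusNotFound" lines). A minimal Python sketch for checking a tag directly against the registry, assuming the usual anonymous bearer-token flow at https://ghcr.io/token; the endpoint, query parameters, and Accept media types are standard OCI-distribution conventions, not something taken from this log:

    # Sketch: ask ghcr.io whether a manifest exists for a given tag, assuming
    # the anonymous token flow (realm https://ghcr.io/token, service ghcr.io).
    import json
    import urllib.error
    import urllib.request

    def tag_exists(repo: str, tag: str) -> bool:
        token_url = ("https://ghcr.io/token?service=ghcr.io"
                     f"&scope=repository:{repo}:pull")
        with urllib.request.urlopen(token_url) as resp:
            token = json.load(resp)["token"]
        req = urllib.request.Request(
            f"https://ghcr.io/v2/{repo}/manifests/{tag}",
            headers={
                "Authorization": f"Bearer {token}",
                "Accept": ("application/vnd.oci.image.index.v1+json, "
                           "application/vnd.docker.distribution.manifest.list.v2+json"),
            },
            method="HEAD",
        )
        try:
            urllib.request.urlopen(req)
            return True
        except urllib.error.HTTPError as err:
            if err.code == 404:   # the NotFound seen in the kubelet errors above
                return False
            raise

    print(tag_exists("flatcar/calico/csi", "v3.30.4"))

A False result here corresponds to the repeated "ghcr.io/flatcar/calico/...:v3.30.4: not found" failures above: the reference does not resolve at pull time, so kubelet keeps cycling between ErrImagePull and ImagePullBackOff.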
Nov 1 04:20:05.839951 env[1306]: time="2025-11-01T04:20:05.839673150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 04:20:06.141878 env[1306]: time="2025-11-01T04:20:06.141821647Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 04:20:06.142616 env[1306]: time="2025-11-01T04:20:06.142568976Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 04:20:06.142854 kubelet[2178]: E1101 04:20:06.142811 2178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 04:20:06.143179 kubelet[2178]: E1101 04:20:06.142866 2178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 04:20:06.143179 kubelet[2178]: E1101 04:20:06.143030 2178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2hz7s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7nddq_calico-system(09a6446d-f1c6-40ae-8ffc-711e84b66ed9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 04:20:06.144575 kubelet[2178]: E1101 04:20:06.144538 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7nddq" podUID="09a6446d-f1c6-40ae-8ffc-711e84b66ed9" Nov 1 04:20:10.322291 systemd[1]: Started sshd@21-10.244.102.154:22-139.178.89.65:37628.service. Nov 1 04:20:10.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.244.102.154:22-139.178.89.65:37628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:20:10.328085 kernel: kauditd_printk_skb: 27 callbacks suppressed Nov 1 04:20:10.328218 kernel: audit: type=1130 audit(1761970810.322:537): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.244.102.154:22-139.178.89.65:37628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 04:20:10.844746 env[1306]: time="2025-11-01T04:20:10.844676398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 04:20:11.148679 env[1306]: time="2025-11-01T04:20:11.148489541Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 04:20:11.149307 env[1306]: time="2025-11-01T04:20:11.149184809Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 04:20:11.149628 kubelet[2178]: E1101 04:20:11.149577 2178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 04:20:11.150061 kubelet[2178]: E1101 04:20:11.150039 2178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 04:20:11.150313 kubelet[2178]: E1101 04:20:11.150281 2178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b3e36ba512a94ba9aab826249dfe86b5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gzqrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-585f46d5f6-8lv48_calico-system(10fb6fc1-31fc-4d32-a2e8-032e174f09df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 04:20:11.153092 env[1306]: time="2025-11-01T04:20:11.153059346Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 04:20:11.231000 audit[5087]: USER_ACCT pid=5087 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:11.236384 sshd[5087]: Accepted publickey for core from 139.178.89.65 port 37628 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 04:20:11.240345 kernel: audit: type=1101 audit(1761970811.231:538): pid=5087 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:11.241838 sshd[5087]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 04:20:11.247992 systemd-logind[1298]: New session 22 of user core. Nov 1 04:20:11.248660 systemd[1]: Started session-22.scope. Nov 1 04:20:11.241000 audit[5087]: CRED_ACQ pid=5087 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:11.254359 kernel: audit: type=1103 audit(1761970811.241:539): pid=5087 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:11.263389 kernel: audit: type=1006 audit(1761970811.241:540): pid=5087 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Nov 1 04:20:11.241000 audit[5087]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef56feaa0 a2=3 a3=0 items=0 ppid=1 pid=5087 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:20:11.270369 kernel: audit: type=1300 audit(1761970811.241:540): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef56feaa0 a2=3 a3=0 items=0 ppid=1 pid=5087 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:20:11.241000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 04:20:11.275339 kernel: audit: type=1327 audit(1761970811.241:540): proctitle=737368643A20636F7265205B707269765D Nov 1 04:20:11.258000 audit[5087]: USER_START pid=5087 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:11.281338 kernel: audit: type=1105 audit(1761970811.258:541): pid=5087 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:11.262000 audit[5090]: CRED_ACQ pid=5090 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:11.290356 kernel: audit: type=1103 audit(1761970811.262:542): pid=5090 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:11.312000 audit[5092]: NETFILTER_CFG table=filter:122 family=2 entries=26 op=nft_register_rule pid=5092 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:20:11.312000 audit[5092]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe1f1f0550 a2=0 a3=7ffe1f1f053c items=0 ppid=2289 pid=5092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:20:11.320299 kernel: audit: type=1325 audit(1761970811.312:543): table=filter:122 family=2 entries=26 op=nft_register_rule pid=5092 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:20:11.320414 kernel: audit: type=1300 audit(1761970811.312:543): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe1f1f0550 a2=0 a3=7ffe1f1f053c items=0 ppid=2289 pid=5092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:20:11.312000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:20:11.325000 audit[5092]: NETFILTER_CFG table=nat:123 family=2 entries=104 op=nft_register_chain pid=5092 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Nov 1 04:20:11.325000 audit[5092]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffe1f1f0550 a2=0 a3=7ffe1f1f053c items=0 ppid=2289 pid=5092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:20:11.325000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Nov 1 04:20:11.474583 env[1306]: time="2025-11-01T04:20:11.474460119Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 04:20:11.475639 env[1306]: time="2025-11-01T04:20:11.475583159Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 04:20:11.476029 kubelet[2178]: E1101 04:20:11.475969 2178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 04:20:11.476165 kubelet[2178]: E1101 04:20:11.476050 2178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 04:20:11.476281 kubelet[2178]: E1101 04:20:11.476231 2178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gzqrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-585f46d5f6-8lv48_calico-system(10fb6fc1-31fc-4d32-a2e8-032e174f09df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 04:20:11.477544 kubelet[2178]: E1101 04:20:11.477473 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-585f46d5f6-8lv48" podUID="10fb6fc1-31fc-4d32-a2e8-032e174f09df" Nov 1 04:20:12.092979 sshd[5087]: pam_unix(sshd:session): session closed for user 
core Nov 1 04:20:12.094000 audit[5087]: USER_END pid=5087 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:12.094000 audit[5087]: CRED_DISP pid=5087 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:12.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.244.102.154:22-139.178.89.65:37628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:20:12.096983 systemd[1]: sshd@21-10.244.102.154:22-139.178.89.65:37628.service: Deactivated successfully. Nov 1 04:20:12.099002 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 04:20:12.099887 systemd-logind[1298]: Session 22 logged out. Waiting for processes to exit. Nov 1 04:20:12.100777 systemd-logind[1298]: Removed session 22. Nov 1 04:20:15.837771 env[1306]: time="2025-11-01T04:20:15.837717146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 04:20:16.149984 env[1306]: time="2025-11-01T04:20:16.149885910Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 04:20:16.150559 env[1306]: time="2025-11-01T04:20:16.150503349Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 04:20:16.150813 kubelet[2178]: E1101 04:20:16.150754 2178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 04:20:16.151638 kubelet[2178]: E1101 04:20:16.150820 2178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 04:20:16.151638 kubelet[2178]: E1101 04:20:16.150981 2178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hqpx6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6dbfc7dd7f-w5zxm_calico-apiserver(e7537f60-17f7-4f3b-b511-49610ae00add): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 04:20:16.152412 kubelet[2178]: E1101 04:20:16.152371 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dbfc7dd7f-w5zxm" podUID="e7537f60-17f7-4f3b-b511-49610ae00add" Nov 1 04:20:16.841853 env[1306]: time="2025-11-01T04:20:16.841814275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 04:20:17.151863 env[1306]: time="2025-11-01T04:20:17.151814305Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 04:20:17.152825 env[1306]: time="2025-11-01T04:20:17.152772913Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 04:20:17.153823 kubelet[2178]: E1101 04:20:17.153100 2178 log.go:32] "PullImage 
from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 04:20:17.153823 kubelet[2178]: E1101 04:20:17.153148 2178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 04:20:17.153823 kubelet[2178]: E1101 04:20:17.153435 2178 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prpcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6dbfc7dd7f-64jld_calico-apiserver(cae162c8-7a97-49e7-94b4-82cbdda1df19): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 04:20:17.154624 kubelet[2178]: E1101 04:20:17.154588 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dbfc7dd7f-64jld" podUID="cae162c8-7a97-49e7-94b4-82cbdda1df19" Nov 1 04:20:17.154894 env[1306]: time="2025-11-01T04:20:17.154862591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 04:20:17.237858 systemd[1]: Started sshd@22-10.244.102.154:22-139.178.89.65:39864.service. Nov 1 04:20:17.246835 kernel: kauditd_printk_skb: 7 callbacks suppressed Nov 1 04:20:17.246969 kernel: audit: type=1130 audit(1761970817.238:548): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.244.102.154:22-139.178.89.65:39864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:20:17.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.244.102.154:22-139.178.89.65:39864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:20:17.487557 env[1306]: time="2025-11-01T04:20:17.485971924Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 04:20:17.487557 env[1306]: time="2025-11-01T04:20:17.487111599Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 04:20:17.488564 kubelet[2178]: E1101 04:20:17.488450 2178 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 04:20:17.489015 kubelet[2178]: E1101 04:20:17.488962 2178 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 04:20:17.489820 kubelet[2178]: E1101 04:20:17.489683 2178 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lxmxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5857878c7b-k98b2_calico-system(875389eb-2978-4aa4-ad6e-7b619ce206e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 04:20:17.492749 kubelet[2178]: E1101 04:20:17.492661 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5857878c7b-k98b2" podUID="875389eb-2978-4aa4-ad6e-7b619ce206e3" Nov 1 04:20:17.838665 kubelet[2178]: E1101 04:20:17.838531 2178 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4ccfj" podUID="f7f98be6-9d36-4c44-bedf-cd179c76bbfe" Nov 1 04:20:18.176000 audit[5129]: USER_ACCT pid=5129 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:18.185808 sshd[5129]: Accepted publickey for core from 139.178.89.65 port 39864 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 04:20:18.186336 kernel: audit: type=1101 audit(1761970818.176:549): pid=5129 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:18.188000 audit[5129]: CRED_ACQ pid=5129 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:18.192926 sshd[5129]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 04:20:18.194551 kernel: audit: type=1103 audit(1761970818.188:550): pid=5129 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:18.194694 kernel: audit: type=1006 audit(1761970818.188:551): pid=5129 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Nov 1 04:20:18.188000 audit[5129]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcfb9fc6b0 a2=3 a3=0 items=0 ppid=1 pid=5129 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:20:18.200748 kernel: audit: type=1300 audit(1761970818.188:551): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcfb9fc6b0 a2=3 a3=0 items=0 ppid=1 pid=5129 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:20:18.203368 kernel: audit: type=1327 audit(1761970818.188:551): proctitle=737368643A20636F7265205B707269765D Nov 1 04:20:18.188000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 04:20:18.206228 systemd[1]: Started session-23.scope. 
Nov 1 04:20:18.207421 systemd-logind[1298]: New session 23 of user core. Nov 1 04:20:18.214000 audit[5129]: USER_START pid=5129 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:18.216000 audit[5132]: CRED_ACQ pid=5132 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:18.224996 kernel: audit: type=1105 audit(1761970818.214:552): pid=5129 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:18.225140 kernel: audit: type=1103 audit(1761970818.216:553): pid=5132 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:19.112978 sshd[5129]: pam_unix(sshd:session): session closed for user core Nov 1 04:20:19.116000 audit[5129]: USER_END pid=5129 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:19.123398 kernel: audit: type=1106 audit(1761970819.116:554): pid=5129 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:19.116000 audit[5129]: CRED_DISP pid=5129 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:19.126199 systemd-logind[1298]: Session 23 logged out. Waiting for processes to exit. Nov 1 04:20:19.127375 kernel: audit: type=1104 audit(1761970819.116:555): pid=5129 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:19.127433 systemd[1]: sshd@22-10.244.102.154:22-139.178.89.65:39864.service: Deactivated successfully. Nov 1 04:20:19.128339 systemd[1]: session-23.scope: Deactivated successfully. Nov 1 04:20:19.129607 systemd-logind[1298]: Removed session 23. Nov 1 04:20:19.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.244.102.154:22-139.178.89.65:39864 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 04:20:21.842079 kubelet[2178]: E1101 04:20:21.841989 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7nddq" podUID="09a6446d-f1c6-40ae-8ffc-711e84b66ed9" Nov 1 04:20:23.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.244.102.154:22-205.210.31.237:57710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:20:23.540811 systemd[1]: Started sshd@23-10.244.102.154:22-205.210.31.237:57710.service. Nov 1 04:20:23.546296 kernel: kauditd_printk_skb: 1 callbacks suppressed Nov 1 04:20:23.546376 kernel: audit: type=1130 audit(1761970823.541:557): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.244.102.154:22-205.210.31.237:57710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:20:24.262536 systemd[1]: Started sshd@24-10.244.102.154:22-139.178.89.65:39868.service. Nov 1 04:20:24.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.244.102.154:22-139.178.89.65:39868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:20:24.269365 kernel: audit: type=1130 audit(1761970824.261:558): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.244.102.154:22-139.178.89.65:39868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 04:20:25.170027 sshd[5143]: Accepted publickey for core from 139.178.89.65 port 39868 ssh2: RSA SHA256:V0PERg6UVsbWZGsAZFbTY/baYEpLUh6zfqFi+pvc+oM Nov 1 04:20:25.168000 audit[5143]: USER_ACCT pid=5143 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:25.179344 kernel: audit: type=1101 audit(1761970825.168:559): pid=5143 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:25.180491 sshd[5143]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 04:20:25.178000 audit[5143]: CRED_ACQ pid=5143 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:25.189753 kernel: audit: type=1103 audit(1761970825.178:560): pid=5143 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:25.189852 kernel: audit: type=1006 audit(1761970825.178:561): pid=5143 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Nov 1 04:20:25.178000 audit[5143]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd739ff4b0 a2=3 a3=0 items=0 ppid=1 pid=5143 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:20:25.193902 kernel: audit: type=1300 audit(1761970825.178:561): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd739ff4b0 a2=3 a3=0 items=0 ppid=1 pid=5143 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 04:20:25.194446 kernel: audit: type=1327 audit(1761970825.178:561): proctitle=737368643A20636F7265205B707269765D Nov 1 04:20:25.178000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Nov 1 04:20:25.200029 systemd-logind[1298]: New session 24 of user core. Nov 1 04:20:25.201215 systemd[1]: Started session-24.scope. 
Nov 1 04:20:25.205000 audit[5143]: USER_START pid=5143 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:25.213469 kernel: audit: type=1105 audit(1761970825.205:562): pid=5143 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:25.212000 audit[5147]: CRED_ACQ pid=5147 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:25.217419 kernel: audit: type=1103 audit(1761970825.212:563): pid=5147 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:25.839119 kubelet[2178]: E1101 04:20:25.839058 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-585f46d5f6-8lv48" podUID="10fb6fc1-31fc-4d32-a2e8-032e174f09df" Nov 1 04:20:25.955243 sshd[5143]: pam_unix(sshd:session): session closed for user core Nov 1 04:20:25.956000 audit[5143]: USER_END pid=5143 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:25.967779 systemd[1]: sshd@24-10.244.102.154:22-139.178.89.65:39868.service: Deactivated successfully. Nov 1 04:20:25.968557 kernel: audit: type=1106 audit(1761970825.956:564): pid=5143 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:25.968976 systemd[1]: session-24.scope: Deactivated successfully. Nov 1 04:20:25.971168 systemd-logind[1298]: Session 24 logged out. Waiting for processes to exit. Nov 1 04:20:25.972506 systemd-logind[1298]: Removed session 24. 
Nov 1 04:20:25.956000 audit[5143]: CRED_DISP pid=5143 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Nov 1 04:20:25.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.244.102.154:22-139.178.89.65:39868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:20:27.840775 kubelet[2178]: E1101 04:20:27.840659 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dbfc7dd7f-w5zxm" podUID="e7537f60-17f7-4f3b-b511-49610ae00add" Nov 1 04:20:28.052707 sshd[5142]: Connection reset by 205.210.31.237 port 57710 [preauth] Nov 1 04:20:28.052998 systemd[1]: sshd@23-10.244.102.154:22-205.210.31.237:57710.service: Deactivated successfully. Nov 1 04:20:28.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.244.102.154:22-205.210.31.237:57710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 04:20:28.841126 kubelet[2178]: E1101 04:20:28.841002 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6dbfc7dd7f-64jld" podUID="cae162c8-7a97-49e7-94b4-82cbdda1df19" Nov 1 04:20:28.842383 kubelet[2178]: E1101 04:20:28.841715 2178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5857878c7b-k98b2" podUID="875389eb-2978-4aa4-ad6e-7b619ce206e3"