Oct 29 04:52:31.978789 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Oct 28 23:40:27 -00 2025
Oct 29 04:52:31.985298 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=201610a31b2ff0ec76573eccf3918f182ba51086e5a85b3aea8675643c4efef7
Oct 29 04:52:31.985323 kernel: BIOS-provided physical RAM map:
Oct 29 04:52:31.985334 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 29 04:52:31.985344 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 29 04:52:31.985353 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 29 04:52:31.985365 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Oct 29 04:52:31.985375 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Oct 29 04:52:31.985384 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Oct 29 04:52:31.985394 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Oct 29 04:52:31.985408 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 29 04:52:31.985418 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 29 04:52:31.985428 kernel: NX (Execute Disable) protection: active
Oct 29 04:52:31.985438 kernel: SMBIOS 2.8 present.
Oct 29 04:52:31.985450 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Oct 29 04:52:31.985461 kernel: Hypervisor detected: KVM
Oct 29 04:52:31.985476 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 29 04:52:31.985487 kernel: kvm-clock: cpu 0, msr 7b1a0001, primary cpu clock
Oct 29 04:52:31.985497 kernel: kvm-clock: using sched offset of 4896958743 cycles
Oct 29 04:52:31.985509 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 29 04:52:31.985520 kernel: tsc: Detected 2500.032 MHz processor
Oct 29 04:52:31.985531 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 29 04:52:31.985542 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 29 04:52:31.985553 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Oct 29 04:52:31.985563 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 29 04:52:31.985578 kernel: Using GB pages for direct mapping
Oct 29 04:52:31.985589 kernel: ACPI: Early table checksum verification disabled
Oct 29 04:52:31.985600 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct 29 04:52:31.985610 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 04:52:31.985621 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 04:52:31.985632 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 04:52:31.985643 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Oct 29 04:52:31.985654 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 04:52:31.985664 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 04:52:31.985679 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 04:52:31.985690 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 04:52:31.985701 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Oct 29 04:52:31.985711 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Oct 29 04:52:31.985731 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Oct 29 04:52:31.985742 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Oct 29 04:52:31.985759 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Oct 29 04:52:31.985774 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Oct 29 04:52:31.985786 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Oct 29 04:52:31.985797 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Oct 29 04:52:31.985809 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Oct 29 04:52:31.985846 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Oct 29 04:52:31.985859 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Oct 29 04:52:31.985871 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Oct 29 04:52:31.985891 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Oct 29 04:52:31.985903 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Oct 29 04:52:31.985914 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Oct 29 04:52:31.985926 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Oct 29 04:52:31.985937 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Oct 29 04:52:31.985954 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Oct 29 04:52:31.985966 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Oct 29 04:52:31.985977 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Oct 29 04:52:31.985988 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Oct 29 04:52:31.985999 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Oct 29 04:52:31.986015 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Oct 29 04:52:31.986027 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Oct 29 04:52:31.986038 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Oct 29 04:52:31.986049 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Oct 29 04:52:31.986061 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Oct 29 04:52:31.986072 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Oct 29 04:52:31.986084 kernel: Zone ranges:
Oct 29 04:52:31.986096 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 29 04:52:31.986107 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Oct 29 04:52:31.986124 kernel: Normal empty
Oct 29 04:52:31.986135 kernel: Movable zone start for each node
Oct 29 04:52:31.986146 kernel: Early memory node ranges
Oct 29 04:52:31.986158 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Oct 29 04:52:31.986169 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Oct 29 04:52:31.986180 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Oct 29 04:52:31.986192 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 29 04:52:31.986203 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 29 04:52:31.986214 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Oct 29 04:52:31.986230 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 29 04:52:31.986242 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 29 04:52:31.986262 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 29 04:52:31.986273 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 29 04:52:31.986296 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 29 04:52:31.986308 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 29 04:52:31.986319 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 29 04:52:31.986331 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 29 04:52:31.986342 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 29 04:52:31.986358 kernel: TSC deadline timer available
Oct 29 04:52:31.986370 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Oct 29 04:52:31.986382 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Oct 29 04:52:31.986393 kernel: Booting paravirtualized kernel on KVM
Oct 29 04:52:31.986405 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 29 04:52:31.986416 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Oct 29 04:52:31.986428 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Oct 29 04:52:31.986439 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Oct 29 04:52:31.986451 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Oct 29 04:52:31.986466 kernel: kvm-guest: stealtime: cpu 0, msr 7da1c0c0
Oct 29 04:52:31.986478 kernel: kvm-guest: PV spinlocks enabled
Oct 29 04:52:31.986489 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 29 04:52:31.986501 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Oct 29 04:52:31.986512 kernel: Policy zone: DMA32
Oct 29 04:52:31.986525 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=201610a31b2ff0ec76573eccf3918f182ba51086e5a85b3aea8675643c4efef7
Oct 29 04:52:31.986538 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 29 04:52:31.986549 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 29 04:52:31.986565 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 29 04:52:31.986577 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 29 04:52:31.986588 kernel: Memory: 1903832K/2096616K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47496K init, 4084K bss, 192524K reserved, 0K cma-reserved)
Oct 29 04:52:31.986600 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Oct 29 04:52:31.986612 kernel: Kernel/User page tables isolation: enabled
Oct 29 04:52:31.986623 kernel: ftrace: allocating 34614 entries in 136 pages
Oct 29 04:52:31.986635 kernel: ftrace: allocated 136 pages with 2 groups
Oct 29 04:52:31.986646 kernel: rcu: Hierarchical RCU implementation.
Oct 29 04:52:31.986658 kernel: rcu: RCU event tracing is enabled.
Oct 29 04:52:31.986674 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Oct 29 04:52:31.986686 kernel: Rude variant of Tasks RCU enabled.
Oct 29 04:52:31.986698 kernel: Tracing variant of Tasks RCU enabled.
Oct 29 04:52:31.986709 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 29 04:52:31.986721 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Oct 29 04:52:31.986732 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Oct 29 04:52:31.986744 kernel: random: crng init done
Oct 29 04:52:31.986769 kernel: Console: colour VGA+ 80x25
Oct 29 04:52:31.986781 kernel: printk: console [tty0] enabled
Oct 29 04:52:31.986793 kernel: printk: console [ttyS0] enabled
Oct 29 04:52:31.986813 kernel: ACPI: Core revision 20210730
Oct 29 04:52:31.986836 kernel: APIC: Switch to symmetric I/O mode setup
Oct 29 04:52:31.986853 kernel: x2apic enabled
Oct 29 04:52:31.986865 kernel: Switched APIC routing to physical x2apic.
Oct 29 04:52:31.986877 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240957bf147, max_idle_ns: 440795216753 ns
Oct 29 04:52:31.986890 kernel: Calibrating delay loop (skipped) preset value.. 5000.06 BogoMIPS (lpj=2500032)
Oct 29 04:52:31.986902 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 29 04:52:31.986919 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Oct 29 04:52:31.986931 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Oct 29 04:52:31.986942 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 29 04:52:31.986954 kernel: Spectre V2 : Mitigation: Retpolines
Oct 29 04:52:31.986966 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 29 04:52:31.986978 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Oct 29 04:52:31.986990 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 29 04:52:31.987002 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Oct 29 04:52:31.987014 kernel: MDS: Mitigation: Clear CPU buffers
Oct 29 04:52:31.987026 kernel: MMIO Stale Data: Unknown: No mitigations
Oct 29 04:52:31.987037 kernel: SRBDS: Unknown: Dependent on hypervisor status
Oct 29 04:52:31.987053 kernel: active return thunk: its_return_thunk
Oct 29 04:52:31.987065 kernel: ITS: Mitigation: Aligned branch/return thunks
Oct 29 04:52:31.987077 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 29 04:52:31.987089 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 29 04:52:31.987101 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 29 04:52:31.987113 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 29 04:52:31.987125 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Oct 29 04:52:31.987137 kernel: Freeing SMP alternatives memory: 32K
Oct 29 04:52:31.987149 kernel: pid_max: default: 32768 minimum: 301
Oct 29 04:52:31.987161 kernel: LSM: Security Framework initializing
Oct 29 04:52:31.987173 kernel: SELinux: Initializing.
Oct 29 04:52:31.987189 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 29 04:52:31.987201 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 29 04:52:31.987213 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Oct 29 04:52:31.987225 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Oct 29 04:52:31.987237 kernel: signal: max sigframe size: 1776
Oct 29 04:52:31.987249 kernel: rcu: Hierarchical SRCU implementation.
Oct 29 04:52:31.987270 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Oct 29 04:52:31.987292 kernel: smp: Bringing up secondary CPUs ...
Oct 29 04:52:31.987305 kernel: x86: Booting SMP configuration:
Oct 29 04:52:31.987317 kernel: .... node #0, CPUs: #1
Oct 29 04:52:31.987335 kernel: kvm-clock: cpu 1, msr 7b1a0041, secondary cpu clock
Oct 29 04:52:31.987377 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Oct 29 04:52:31.987391 kernel: kvm-guest: stealtime: cpu 1, msr 7da5c0c0
Oct 29 04:52:31.987403 kernel: smp: Brought up 1 node, 2 CPUs
Oct 29 04:52:31.987415 kernel: smpboot: Max logical packages: 16
Oct 29 04:52:31.987427 kernel: smpboot: Total of 2 processors activated (10000.12 BogoMIPS)
Oct 29 04:52:31.987439 kernel: devtmpfs: initialized
Oct 29 04:52:31.987451 kernel: x86/mm: Memory block size: 128MB
Oct 29 04:52:31.987463 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 29 04:52:31.987481 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Oct 29 04:52:31.987494 kernel: pinctrl core: initialized pinctrl subsystem
Oct 29 04:52:31.987506 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 29 04:52:31.987518 kernel: audit: initializing netlink subsys (disabled)
Oct 29 04:52:31.987530 kernel: audit: type=2000 audit(1761713551.040:1): state=initialized audit_enabled=0 res=1
Oct 29 04:52:31.987541 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 29 04:52:31.987553 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 29 04:52:31.987565 kernel: cpuidle: using governor menu
Oct 29 04:52:31.987577 kernel: ACPI: bus type PCI registered
Oct 29 04:52:31.987594 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 29 04:52:31.987606 kernel: dca service started, version 1.12.1
Oct 29 04:52:31.987618 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Oct 29 04:52:31.987630 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Oct 29 04:52:31.987642 kernel: PCI: Using configuration type 1 for base access
Oct 29 04:52:31.987654 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 29 04:52:31.987666 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Oct 29 04:52:31.987678 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Oct 29 04:52:31.987690 kernel: ACPI: Added _OSI(Module Device)
Oct 29 04:52:31.987707 kernel: ACPI: Added _OSI(Processor Device)
Oct 29 04:52:31.987719 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 29 04:52:31.987731 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Oct 29 04:52:31.987743 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Oct 29 04:52:31.987755 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Oct 29 04:52:31.987767 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 29 04:52:31.987779 kernel: ACPI: Interpreter enabled
Oct 29 04:52:31.987791 kernel: ACPI: PM: (supports S0 S5)
Oct 29 04:52:31.987803 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 29 04:52:31.987829 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 29 04:52:31.987843 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 29 04:52:31.987855 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 29 04:52:31.988160 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 29 04:52:31.988337 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 29 04:52:31.988494 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 29 04:52:31.988512 kernel: PCI host bridge to bus 0000:00
Oct 29 04:52:31.988713 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 29 04:52:31.988880 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 29 04:52:31.989056 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 29 04:52:31.989224 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Oct 29 04:52:31.989393 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 29 04:52:31.989547 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Oct 29 04:52:31.989700 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 29 04:52:31.989915 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Oct 29 04:52:31.990131 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Oct 29 04:52:31.990320 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Oct 29 04:52:31.990479 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Oct 29 04:52:31.990638 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Oct 29 04:52:31.990794 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 29 04:52:31.995131 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Oct 29 04:52:31.995317 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Oct 29 04:52:31.995504 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Oct 29 04:52:31.995667 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Oct 29 04:52:31.995853 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Oct 29 04:52:31.996016 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Oct 29 04:52:31.996211 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Oct 29 04:52:31.996406 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Oct 29 04:52:31.996606 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Oct 29 04:52:31.996763 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Oct 29 04:52:31.996965 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Oct 29 04:52:31.997123 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Oct 29 04:52:31.997307 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Oct 29 04:52:31.997479 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Oct 29 04:52:31.997646 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Oct 29 04:52:31.997814 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Oct 29 04:52:31.998007 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Oct 29 04:52:31.998194 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Oct 29 04:52:31.998380 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Oct 29 04:52:31.998544 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Oct 29 04:52:31.998711 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Oct 29 04:52:31.998893 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Oct 29 04:52:31.999061 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Oct 29 04:52:31.999218 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Oct 29 04:52:31.999411 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Oct 29 04:52:31.999579 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Oct 29 04:52:31.999742 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 29 04:52:31.999943 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Oct 29 04:52:32.000102 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Oct 29 04:52:32.000270 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Oct 29 04:52:32.000496 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Oct 29 04:52:32.000655 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Oct 29 04:52:32.000862 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Oct 29 04:52:32.001041 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Oct 29 04:52:32.001209 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Oct 29 04:52:32.001412 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Oct 29 04:52:32.001575 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Oct 29 04:52:32.001767 kernel: pci_bus 0000:02: extended config space not accessible
Oct 29 04:52:32.010088 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Oct 29 04:52:32.010297 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Oct 29 04:52:32.010470 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Oct 29 04:52:32.010637 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Oct 29 04:52:32.010883 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Oct 29 04:52:32.011054 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Oct 29 04:52:32.011248 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Oct 29 04:52:32.011476 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Oct 29 04:52:32.011639 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Oct 29 04:52:32.011830 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Oct 29 04:52:32.012045 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Oct 29 04:52:32.012207 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Oct 29 04:52:32.012390 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Oct 29 04:52:32.012547 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Oct 29 04:52:32.012737 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Oct 29 04:52:32.012926 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Oct 29 04:52:32.013087 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Oct 29 04:52:32.013285 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Oct 29 04:52:32.013446 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Oct 29 04:52:32.013600 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Oct 29 04:52:32.013809 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Oct 29 04:52:32.013982 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Oct 29 04:52:32.014146 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Oct 29 04:52:32.014342 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Oct 29 04:52:32.014498 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Oct 29 04:52:32.014651 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Oct 29 04:52:32.014833 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Oct 29 04:52:32.014997 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Oct 29 04:52:32.015164 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 29 04:52:32.015183 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 29 04:52:32.015196 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 29 04:52:32.015215 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 29 04:52:32.015229 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 29 04:52:32.015241 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 29 04:52:32.015254 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 29 04:52:32.015273 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 29 04:52:32.015298 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 29 04:52:32.015310 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 29 04:52:32.015323 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 29 04:52:32.015335 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 29 04:52:32.015353 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 29 04:52:32.015366 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 29 04:52:32.015378 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 29 04:52:32.015391 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 29 04:52:32.015403 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 29 04:52:32.015416 kernel: iommu: Default domain type: Translated
Oct 29 04:52:32.015428 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 29 04:52:32.015598 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 29 04:52:32.015766 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 29 04:52:32.024004 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 29 04:52:32.024030 kernel: vgaarb: loaded
Oct 29 04:52:32.024044 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 29 04:52:32.024057 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Oct 29 04:52:32.024069 kernel: PTP clock support registered
Oct 29 04:52:32.024081 kernel: PCI: Using ACPI for IRQ routing
Oct 29 04:52:32.024093 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 29 04:52:32.024105 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 29 04:52:32.024126 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Oct 29 04:52:32.024138 kernel: clocksource: Switched to clocksource kvm-clock
Oct 29 04:52:32.024150 kernel: VFS: Disk quotas dquot_6.6.0
Oct 29 04:52:32.024162 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 29 04:52:32.024174 kernel: pnp: PnP ACPI init
Oct 29 04:52:32.024422 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Oct 29 04:52:32.024445 kernel: pnp: PnP ACPI: found 5 devices
Oct 29 04:52:32.024458 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 29 04:52:32.024470 kernel: NET: Registered PF_INET protocol family
Oct 29 04:52:32.024500 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 29 04:52:32.024513 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Oct 29 04:52:32.024525 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 29 04:52:32.024538 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 29 04:52:32.024550 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Oct 29 04:52:32.024563 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Oct 29 04:52:32.024575 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 29 04:52:32.024588 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 29 04:52:32.024604 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 29 04:52:32.024617 kernel: NET: Registered PF_XDP protocol family
Oct 29 04:52:32.024785 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Oct 29 04:52:32.024987 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Oct 29 04:52:32.025159 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Oct 29 04:52:32.025344 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Oct 29 04:52:32.025501 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Oct 29 04:52:32.025670 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Oct 29 04:52:32.025837 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Oct 29 04:52:32.026049 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Oct 29 04:52:32.026205 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Oct 29 04:52:32.026385 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Oct 29 04:52:32.026540 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Oct 29 04:52:32.026707 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Oct 29 04:52:32.026887 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Oct 29 04:52:32.027060 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Oct 29 04:52:32.027219 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Oct 29 04:52:32.027411 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Oct 29 04:52:32.027578 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Oct 29 04:52:32.027751 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Oct 29 04:52:32.027933 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Oct 29 04:52:32.028092 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Oct 29 04:52:32.028283 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Oct 29 04:52:32.028453 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Oct 29 04:52:32.028647 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Oct 29 04:52:32.028810 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Oct 29 04:52:32.028988 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Oct 29 04:52:32.029144 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Oct 29 04:52:32.029327 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Oct 29 04:52:32.029488 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Oct 29 04:52:32.029666 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Oct 29 04:52:32.029899 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Oct 29 04:52:32.030074 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Oct 29 04:52:32.030309 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Oct 29 04:52:32.030469 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Oct 29 04:52:32.030629 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Oct 29 04:52:32.030789 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Oct 29 04:52:32.040244 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Oct 29 04:52:32.040435 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Oct 29 04:52:32.040608 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Oct 29 04:52:32.040795 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Oct 29 04:52:32.040986 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Oct 29 04:52:32.041143 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Oct 29 04:52:32.041315 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Oct 29 04:52:32.041477 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Oct 29 04:52:32.041648 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Oct 29 04:52:32.041840 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Oct 29 04:52:32.042032 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Oct 29 04:52:32.042193 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Oct 29 04:52:32.042409 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Oct 29 04:52:32.042575 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Oct 29 04:52:32.042747 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 29 04:52:32.042934 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 29 04:52:32.043089 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 29 04:52:32.043245 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 29 04:52:32.043403 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Oct 29 04:52:32.043550 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Oct 29 04:52:32.043707 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Oct 29 04:52:32.043913 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Oct 29 04:52:32.044114 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Oct 29 04:52:32.044289 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Oct 29 04:52:32.044493 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Oct 29 04:52:32.044669 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Oct 29 04:52:32.044834 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Oct 29 04:52:32.045016 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Oct 29 04:52:32.045195 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Oct 29 04:52:32.045364 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Oct 29 04:52:32.045519 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Oct 29 04:52:32.045709 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Oct 29 04:52:32.046955 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Oct 29 04:52:32.047150 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Oct 29 04:52:32.047343 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Oct 29 04:52:32.047502 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Oct 29 04:52:32.047674 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Oct 29 04:52:32.047827 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Oct 29 04:52:32.048005 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Oct 29 04:52:32.048167 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Oct 29 04:52:32.048371 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Oct 29 04:52:32.048530 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Oct 29 04:52:32.048678 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Oct 29 04:52:32.048890 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Oct 29 04:52:32.049044 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Oct 29 04:52:32.049193 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 29 04:52:32.049213 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 29 04:52:32.049227 kernel: PCI: CLS 0 bytes, default 64
Oct 29 04:52:32.049241 kernel: 
PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Oct 29 04:52:32.049270 kernel: software IO TLB: mapped [mem 0x0000000072000000-0x0000000076000000] (64MB) Oct 29 04:52:32.049296 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Oct 29 04:52:32.049310 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240957bf147, max_idle_ns: 440795216753 ns Oct 29 04:52:32.049323 kernel: Initialise system trusted keyrings Oct 29 04:52:32.049338 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 29 04:52:32.049350 kernel: Key type asymmetric registered Oct 29 04:52:32.049363 kernel: Asymmetric key parser 'x509' registered Oct 29 04:52:32.049380 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 29 04:52:32.049394 kernel: io scheduler mq-deadline registered Oct 29 04:52:32.049412 kernel: io scheduler kyber registered Oct 29 04:52:32.049426 kernel: io scheduler bfq registered Oct 29 04:52:32.049584 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Oct 29 04:52:32.049744 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Oct 29 04:52:32.049925 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 29 04:52:32.050095 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Oct 29 04:52:32.050267 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Oct 29 04:52:32.050442 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 29 04:52:32.050626 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Oct 29 04:52:32.050771 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Oct 29 04:52:32.050929 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 29 04:52:32.051087 
kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Oct 29 04:52:32.051265 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Oct 29 04:52:32.051479 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 29 04:52:32.051641 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Oct 29 04:52:32.051808 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Oct 29 04:52:32.051985 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 29 04:52:32.052145 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Oct 29 04:52:32.052336 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Oct 29 04:52:32.052517 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 29 04:52:32.052714 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Oct 29 04:52:32.052927 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Oct 29 04:52:32.053100 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 29 04:52:32.053290 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Oct 29 04:52:32.053462 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Oct 29 04:52:32.053648 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 29 04:52:32.053668 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 29 04:52:32.053681 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 29 04:52:32.053694 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Oct 29 04:52:32.053706 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 29 04:52:32.053718 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, 
base_baud = 115200) is a 16550A Oct 29 04:52:32.053731 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 29 04:52:32.053749 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 29 04:52:32.053778 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 29 04:52:32.060214 kernel: rtc_cmos 00:03: RTC can wake from S4 Oct 29 04:52:32.060247 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 29 04:52:32.060421 kernel: rtc_cmos 00:03: registered as rtc0 Oct 29 04:52:32.060579 kernel: rtc_cmos 00:03: setting system clock to 2025-10-29T04:52:31 UTC (1761713551) Oct 29 04:52:32.060735 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Oct 29 04:52:32.060755 kernel: intel_pstate: CPU model not supported Oct 29 04:52:32.060769 kernel: NET: Registered PF_INET6 protocol family Oct 29 04:52:32.060791 kernel: Segment Routing with IPv6 Oct 29 04:52:32.060805 kernel: In-situ OAM (IOAM) with IPv6 Oct 29 04:52:32.060835 kernel: NET: Registered PF_PACKET protocol family Oct 29 04:52:32.060851 kernel: Key type dns_resolver registered Oct 29 04:52:32.060864 kernel: IPI shorthand broadcast: enabled Oct 29 04:52:32.060878 kernel: sched_clock: Marking stable (1011771836, 227241705)->(1545796072, -306782531) Oct 29 04:52:32.060891 kernel: registered taskstats version 1 Oct 29 04:52:32.060904 kernel: Loading compiled-in X.509 certificates Oct 29 04:52:32.060917 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: 88bc8a4d729b2f514b4a44a35b666d3248ded14a' Oct 29 04:52:32.060946 kernel: Key type .fscrypt registered Oct 29 04:52:32.060959 kernel: Key type fscrypt-provisioning registered Oct 29 04:52:32.060972 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 29 04:52:32.060985 kernel: ima: Allocated hash algorithm: sha1 Oct 29 04:52:32.061008 kernel: ima: No architecture policies found Oct 29 04:52:32.061021 kernel: clk: Disabling unused clocks Oct 29 04:52:32.061045 kernel: Freeing unused kernel image (initmem) memory: 47496K Oct 29 04:52:32.061058 kernel: Write protecting the kernel read-only data: 28672k Oct 29 04:52:32.061075 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Oct 29 04:52:32.061110 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Oct 29 04:52:32.061122 kernel: Run /init as init process Oct 29 04:52:32.061134 kernel: with arguments: Oct 29 04:52:32.061146 kernel: /init Oct 29 04:52:32.061158 kernel: with environment: Oct 29 04:52:32.061170 kernel: HOME=/ Oct 29 04:52:32.061182 kernel: TERM=linux Oct 29 04:52:32.061194 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 29 04:52:32.061216 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 29 04:52:32.061239 systemd[1]: Detected virtualization kvm. Oct 29 04:52:32.061252 systemd[1]: Detected architecture x86-64. Oct 29 04:52:32.061297 systemd[1]: Running in initrd. Oct 29 04:52:32.061311 systemd[1]: No hostname configured, using default hostname. Oct 29 04:52:32.061324 systemd[1]: Hostname set to <localhost>. Oct 29 04:52:32.061338 systemd[1]: Initializing machine ID from VM UUID. Oct 29 04:52:32.061352 systemd[1]: Queued start job for default target initrd.target. Oct 29 04:52:32.061371 systemd[1]: Started systemd-ask-password-console.path. Oct 29 04:52:32.061384 systemd[1]: Reached target cryptsetup.target. Oct 29 04:52:32.061398 systemd[1]: Reached target paths.target. Oct 29 04:52:32.061411 systemd[1]: Reached target slices.target.
Oct 29 04:52:32.061425 systemd[1]: Reached target swap.target. Oct 29 04:52:32.061438 systemd[1]: Reached target timers.target. Oct 29 04:52:32.061453 systemd[1]: Listening on iscsid.socket. Oct 29 04:52:32.061470 systemd[1]: Listening on iscsiuio.socket. Oct 29 04:52:32.061484 systemd[1]: Listening on systemd-journald-audit.socket. Oct 29 04:52:32.061498 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 29 04:52:32.061512 systemd[1]: Listening on systemd-journald.socket. Oct 29 04:52:32.061526 systemd[1]: Listening on systemd-networkd.socket. Oct 29 04:52:32.061539 systemd[1]: Listening on systemd-udevd-control.socket. Oct 29 04:52:32.061559 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 29 04:52:32.061573 systemd[1]: Reached target sockets.target. Oct 29 04:52:32.061586 systemd[1]: Starting kmod-static-nodes.service... Oct 29 04:52:32.061605 systemd[1]: Finished network-cleanup.service. Oct 29 04:52:32.061619 systemd[1]: Starting systemd-fsck-usr.service... Oct 29 04:52:32.061632 systemd[1]: Starting systemd-journald.service... Oct 29 04:52:32.061659 systemd[1]: Starting systemd-modules-load.service... Oct 29 04:52:32.061673 systemd[1]: Starting systemd-resolved.service... Oct 29 04:52:32.061687 systemd[1]: Starting systemd-vconsole-setup.service... Oct 29 04:52:32.061700 systemd[1]: Finished kmod-static-nodes.service. Oct 29 04:52:32.061731 systemd-journald[200]: Journal started Oct 29 04:52:32.061845 systemd-journald[200]: Runtime Journal (/run/log/journal/5a3dcf57d1d345a3899abc73277112dd) is 4.7M, max 38.1M, 33.3M free. Oct 29 04:52:31.981791 systemd-modules-load[201]: Inserted module 'overlay' Oct 29 04:52:32.082256 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Oct 29 04:52:32.082325 kernel: Bridge firewalling registered Oct 29 04:52:32.034782 systemd-resolved[202]: Positive Trust Anchors: Oct 29 04:52:32.089457 systemd[1]: Started systemd-resolved.service. Oct 29 04:52:32.089486 kernel: audit: type=1130 audit(1761713552.082:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:32.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:32.034803 systemd-resolved[202]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 29 04:52:32.097199 systemd[1]: Started systemd-journald.service. Oct 29 04:52:32.097227 kernel: audit: type=1130 audit(1761713552.089:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:32.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 04:52:32.038887 systemd-resolved[202]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 29 04:52:32.106436 kernel: audit: type=1130 audit(1761713552.097:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:32.106462 kernel: SCSI subsystem initialized Oct 29 04:52:32.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:32.045150 systemd-resolved[202]: Defaulting to hostname 'linux'. Oct 29 04:52:32.112534 kernel: audit: type=1130 audit(1761713552.106:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:32.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:32.068755 systemd-modules-load[201]: Inserted module 'br_netfilter' Oct 29 04:52:32.128652 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Oct 29 04:52:32.128677 kernel: device-mapper: uevent: version 1.0.3 Oct 29 04:52:32.128701 kernel: audit: type=1130 audit(1761713552.112:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:32.128761 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 29 04:52:32.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:32.098300 systemd[1]: Finished systemd-fsck-usr.service. Oct 29 04:52:32.107377 systemd[1]: Finished systemd-vconsole-setup.service. Oct 29 04:52:32.113416 systemd[1]: Reached target nss-lookup.target. Oct 29 04:52:32.115254 systemd[1]: Starting dracut-cmdline-ask.service... Oct 29 04:52:32.128999 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 29 04:52:32.131283 systemd-modules-load[201]: Inserted module 'dm_multipath' Oct 29 04:52:32.139909 systemd[1]: Finished systemd-modules-load.service. Oct 29 04:52:32.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:32.143840 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 29 04:52:32.157774 kernel: audit: type=1130 audit(1761713552.142:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:32.157815 kernel: audit: type=1130 audit(1761713552.149:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 04:52:32.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:32.151486 systemd[1]: Starting systemd-sysctl.service... Oct 29 04:52:32.160938 systemd[1]: Finished dracut-cmdline-ask.service. Oct 29 04:52:32.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:32.162706 systemd[1]: Starting dracut-cmdline.service... Oct 29 04:52:32.170966 kernel: audit: type=1130 audit(1761713552.160:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:32.170145 systemd[1]: Finished systemd-sysctl.service. Oct 29 04:52:32.190489 kernel: audit: type=1130 audit(1761713552.169:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:32.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 04:52:32.190739 dracut-cmdline[223]: dracut-dracut-053 Oct 29 04:52:32.190739 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=201610a31b2ff0ec76573eccf3918f182ba51086e5a85b3aea8675643c4efef7 Oct 29 04:52:32.268852 kernel: Loading iSCSI transport class v2.0-870. Oct 29 04:52:32.291863 kernel: iscsi: registered transport (tcp) Oct 29 04:52:32.322231 kernel: iscsi: registered transport (qla4xxx) Oct 29 04:52:32.322370 kernel: QLogic iSCSI HBA Driver Oct 29 04:52:32.373741 systemd[1]: Finished dracut-cmdline.service. Oct 29 04:52:32.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:32.376024 systemd[1]: Starting dracut-pre-udev.service... Oct 29 04:52:32.436947 kernel: raid6: sse2x4 gen() 12692 MB/s Oct 29 04:52:32.454912 kernel: raid6: sse2x4 xor() 7288 MB/s Oct 29 04:52:32.472941 kernel: raid6: sse2x2 gen() 8979 MB/s Oct 29 04:52:32.490866 kernel: raid6: sse2x2 xor() 7760 MB/s Oct 29 04:52:32.508936 kernel: raid6: sse2x1 gen() 9471 MB/s Oct 29 04:52:32.527590 kernel: raid6: sse2x1 xor() 6886 MB/s Oct 29 04:52:32.527629 kernel: raid6: using algorithm sse2x4 gen() 12692 MB/s Oct 29 04:52:32.527648 kernel: raid6: .... xor() 7288 MB/s, rmw enabled Oct 29 04:52:32.528926 kernel: raid6: using ssse3x2 recovery algorithm Oct 29 04:52:32.546869 kernel: xor: automatically using best checksumming function avx Oct 29 04:52:32.666857 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Oct 29 04:52:32.679832 systemd[1]: Finished dracut-pre-udev.service. 
Oct 29 04:52:32.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:32.680000 audit: BPF prog-id=7 op=LOAD Oct 29 04:52:32.680000 audit: BPF prog-id=8 op=LOAD Oct 29 04:52:32.681868 systemd[1]: Starting systemd-udevd.service... Oct 29 04:52:32.699131 systemd-udevd[400]: Using default interface naming scheme 'v252'. Oct 29 04:52:32.707244 systemd[1]: Started systemd-udevd.service. Oct 29 04:52:32.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:32.713309 systemd[1]: Starting dracut-pre-trigger.service... Oct 29 04:52:32.731760 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation Oct 29 04:52:32.778285 systemd[1]: Finished dracut-pre-trigger.service. Oct 29 04:52:32.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:32.788666 systemd[1]: Starting systemd-udev-trigger.service... Oct 29 04:52:32.894662 systemd[1]: Finished systemd-udev-trigger.service. Oct 29 04:52:32.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 04:52:32.992901 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Oct 29 04:52:33.101346 kernel: ACPI: bus type USB registered Oct 29 04:52:33.101374 kernel: usbcore: registered new interface driver usbfs Oct 29 04:52:33.101391 kernel: usbcore: registered new interface driver hub Oct 29 04:52:33.101417 kernel: usbcore: registered new device driver usb Oct 29 04:52:33.101435 kernel: cryptd: max_cpu_qlen set to 1000 Oct 29 04:52:33.101458 kernel: AVX version of gcm_enc/dec engaged. Oct 29 04:52:33.101476 kernel: AES CTR mode by8 optimization enabled Oct 29 04:52:33.101492 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 29 04:52:33.101508 kernel: GPT:17805311 != 125829119 Oct 29 04:52:33.101524 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 29 04:52:33.101540 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Oct 29 04:52:33.101752 kernel: GPT:17805311 != 125829119 Oct 29 04:52:33.101776 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Oct 29 04:52:33.102010 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 29 04:52:33.102041 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Oct 29 04:52:33.102235 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 29 04:52:33.102268 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Oct 29 04:52:33.102449 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Oct 29 04:52:33.102628 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Oct 29 04:52:33.102810 kernel: hub 1-0:1.0: USB hub found Oct 29 04:52:33.103081 kernel: hub 1-0:1.0: 4 ports detected Oct 29 04:52:33.103304 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Oct 29 04:52:33.103515 kernel: hub 2-0:1.0: USB hub found Oct 29 04:52:33.103717 kernel: hub 2-0:1.0: 4 ports detected Oct 29 04:52:33.104882 kernel: libata version 3.00 loaded. Oct 29 04:52:33.151851 kernel: ahci 0000:00:1f.2: version 3.0 Oct 29 04:52:33.178884 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 29 04:52:33.178912 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (455) Oct 29 04:52:33.178930 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Oct 29 04:52:33.179112 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 29 04:52:33.179313 kernel: scsi host0: ahci Oct 29 04:52:33.179522 kernel: scsi host1: ahci Oct 29 04:52:33.179719 kernel: scsi host2: ahci Oct 29 04:52:33.179938 kernel: scsi host3: ahci Oct 29 04:52:33.180134 kernel: scsi host4: ahci Oct 29 04:52:33.180345 kernel: scsi host5: ahci Oct 29 04:52:33.180533 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Oct 29 04:52:33.180554 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Oct 29 04:52:33.180571 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Oct 29 04:52:33.180588 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Oct 29 04:52:33.180611 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Oct 29 04:52:33.180628 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Oct 29 04:52:33.154979 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 29 04:52:33.251425 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 29 04:52:33.252320 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 29 04:52:33.258807 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 29 04:52:33.264104 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Oct 29 04:52:33.266264 systemd[1]: Starting disk-uuid.service... Oct 29 04:52:33.276149 disk-uuid[528]: Primary Header is updated. Oct 29 04:52:33.276149 disk-uuid[528]: Secondary Entries is updated. Oct 29 04:52:33.276149 disk-uuid[528]: Secondary Header is updated. Oct 29 04:52:33.280860 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 29 04:52:33.296863 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 29 04:52:33.331862 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Oct 29 04:52:33.473865 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 29 04:52:33.490140 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 29 04:52:33.490247 kernel: ata3: SATA link down (SStatus 0 SControl 300) Oct 29 04:52:33.491787 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 29 04:52:33.495351 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 29 04:52:33.495390 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 29 04:52:33.495429 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 29 04:52:33.518372 kernel: usbcore: registered new interface driver usbhid Oct 29 04:52:33.518457 kernel: usbhid: USB HID core driver Oct 29 04:52:33.528637 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Oct 29 04:52:33.528675 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Oct 29 04:52:34.293848 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 29 04:52:34.295396 disk-uuid[529]: The operation has completed successfully. Oct 29 04:52:34.353300 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 29 04:52:34.353469 systemd[1]: Finished disk-uuid.service. Oct 29 04:52:34.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 04:52:34.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:34.355472 systemd[1]: Starting verity-setup.service... Oct 29 04:52:34.382857 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Oct 29 04:52:34.433942 systemd[1]: Found device dev-mapper-usr.device. Oct 29 04:52:34.437925 systemd[1]: Mounting sysusr-usr.mount... Oct 29 04:52:34.440347 systemd[1]: Finished verity-setup.service. Oct 29 04:52:34.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:34.536878 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 29 04:52:34.537514 systemd[1]: Mounted sysusr-usr.mount. Oct 29 04:52:34.538406 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Oct 29 04:52:34.539477 systemd[1]: Starting ignition-setup.service... Oct 29 04:52:34.542089 systemd[1]: Starting parse-ip-for-networkd.service... Oct 29 04:52:34.560088 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 29 04:52:34.560160 kernel: BTRFS info (device vda6): using free space tree Oct 29 04:52:34.560180 kernel: BTRFS info (device vda6): has skinny extents Oct 29 04:52:34.578829 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 29 04:52:34.586617 systemd[1]: Finished ignition-setup.service. Oct 29 04:52:34.588508 systemd[1]: Starting ignition-fetch-offline.service... Oct 29 04:52:34.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 04:52:34.704054 systemd[1]: Finished parse-ip-for-networkd.service. Oct 29 04:52:34.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:34.707000 audit: BPF prog-id=9 op=LOAD Oct 29 04:52:34.709148 systemd[1]: Starting systemd-networkd.service... Oct 29 04:52:34.746094 systemd-networkd[710]: lo: Link UP Oct 29 04:52:34.746109 systemd-networkd[710]: lo: Gained carrier Oct 29 04:52:34.747603 systemd-networkd[710]: Enumeration completed Oct 29 04:52:34.748387 systemd-networkd[710]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 29 04:52:34.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:34.749318 systemd[1]: Started systemd-networkd.service. Oct 29 04:52:34.750975 systemd-networkd[710]: eth0: Link UP Oct 29 04:52:34.750982 systemd-networkd[710]: eth0: Gained carrier Oct 29 04:52:34.765212 ignition[629]: Ignition 2.14.0 Oct 29 04:52:34.750995 systemd[1]: Reached target network.target. Oct 29 04:52:34.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:34.765236 ignition[629]: Stage: fetch-offline Oct 29 04:52:34.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:34.753206 systemd[1]: Starting iscsiuio.service... Oct 29 04:52:34.765372 ignition[629]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 29 04:52:34.770694 systemd[1]: Started iscsiuio.service. 
Oct 29 04:52:34.765419 ignition[629]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 29 04:52:34.771955 systemd[1]: Finished ignition-fetch-offline.service. Oct 29 04:52:34.767127 ignition[629]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 29 04:52:34.774565 systemd[1]: Starting ignition-fetch.service... Oct 29 04:52:34.767303 ignition[629]: parsed url from cmdline: "" Oct 29 04:52:34.782628 systemd[1]: Starting iscsid.service... Oct 29 04:52:34.767311 ignition[629]: no config URL provided Oct 29 04:52:34.767322 ignition[629]: reading system config file "/usr/lib/ignition/user.ign" Oct 29 04:52:34.767338 ignition[629]: no config at "/usr/lib/ignition/user.ign" Oct 29 04:52:34.767430 ignition[629]: failed to fetch config: resource requires networking Oct 29 04:52:34.789416 iscsid[720]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 29 04:52:34.789416 iscsid[720]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Oct 29 04:52:34.789416 iscsid[720]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 29 04:52:34.789416 iscsid[720]: If using hardware iscsi like qla4xxx this message can be ignored. 
Oct 29 04:52:34.789416 iscsid[720]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 29 04:52:34.789416 iscsid[720]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 29 04:52:34.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:34.767615 ignition[629]: Ignition finished successfully Oct 29 04:52:34.791009 systemd-networkd[710]: eth0: DHCPv4 address 10.230.24.246/30, gateway 10.230.24.245 acquired from 10.230.24.245 Oct 29 04:52:34.793281 systemd[1]: Started iscsid.service. Oct 29 04:52:34.795584 systemd[1]: Starting dracut-initqueue.service... Oct 29 04:52:34.806327 ignition[715]: Ignition 2.14.0 Oct 29 04:52:34.806355 ignition[715]: Stage: fetch Oct 29 04:52:34.806555 ignition[715]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 29 04:52:34.806591 ignition[715]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 29 04:52:34.809872 ignition[715]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 29 04:52:34.810016 ignition[715]: parsed url from cmdline: "" Oct 29 04:52:34.810024 ignition[715]: no config URL provided Oct 29 04:52:34.810034 ignition[715]: reading system config file "/usr/lib/ignition/user.ign" Oct 29 04:52:34.810050 ignition[715]: no config at "/usr/lib/ignition/user.ign" Oct 29 04:52:34.813497 ignition[715]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Oct 29 04:52:34.813546 ignition[715]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Oct 29 04:52:34.816044 systemd[1]: Finished dracut-initqueue.service. 
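[Editor's note: the iscsid warning above explains how to resolve the missing-InitiatorName condition. A minimal sketch of the fix follows; the IQN value `iqn.2001-04.com.example:node1` is an illustrative placeholder (only the `iqn.yyyy-mm.<reversed domain name>[:identifier]` shape is prescribed by the log message), and the snippet writes to a temporary directory instead of `/etc/iscsi` so it can run unprivileged.]

```shell
# Sketch: create an InitiatorName file of the form iscsid asks for.
# Real target path (per the log) is /etc/iscsi/initiatorname.iscsi;
# a temp dir is used here so the example needs no root privileges.
dir="$(mktemp -d)"
# IQN format: iqn.<yyyy-mm>.<reversed domain name>[:identifier]
printf 'InitiatorName=iqn.2001-04.com.example:node1\n' > "$dir/initiatorname.iscsi"
# Verify the file starts with a well-formed InitiatorName= line.
grep -q '^InitiatorName=iqn\.' "$dir/initiatorname.iscsi" && echo "format ok"
rm -rf "$dir"
```

After writing the real file, restarting iscsid would pick up the new name; on hardware iSCSI (e.g. qla4xxx), the log notes the warning can be ignored entirely.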
Oct 29 04:52:34.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:34.817485 systemd[1]: Reached target remote-fs-pre.target. Oct 29 04:52:34.818894 systemd[1]: Reached target remote-cryptsetup.target. Oct 29 04:52:34.818429 ignition[715]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Oct 29 04:52:34.821096 systemd[1]: Reached target remote-fs.target. Oct 29 04:52:34.826377 systemd[1]: Starting dracut-pre-mount.service... Oct 29 04:52:34.838080 ignition[715]: GET result: OK Oct 29 04:52:34.838875 ignition[715]: parsing config with SHA512: f8a4307138decaaea4125ba561a67fd4355c7e81d587f7a16e506375beef0e320f84033733cca4eeccfe34b4063027ad61d0e327ee2ff3b3c101e7b3b07bc9ea Oct 29 04:52:34.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:34.842378 systemd[1]: Finished dracut-pre-mount.service. Oct 29 04:52:34.851763 unknown[715]: fetched base config from "system" Oct 29 04:52:34.852644 unknown[715]: fetched base config from "system" Oct 29 04:52:34.853455 unknown[715]: fetched user config from "openstack" Oct 29 04:52:34.855030 ignition[715]: fetch: fetch complete Oct 29 04:52:34.855746 ignition[715]: fetch: fetch passed Oct 29 04:52:34.856547 ignition[715]: Ignition finished successfully Oct 29 04:52:34.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:34.859005 systemd[1]: Finished ignition-fetch.service. Oct 29 04:52:34.860869 systemd[1]: Starting ignition-kargs.service... 
Oct 29 04:52:34.874630 ignition[735]: Ignition 2.14.0 Oct 29 04:52:34.875657 ignition[735]: Stage: kargs Oct 29 04:52:34.876491 ignition[735]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 29 04:52:34.877473 ignition[735]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 29 04:52:34.878912 ignition[735]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 29 04:52:34.881540 ignition[735]: kargs: kargs passed Oct 29 04:52:34.882316 ignition[735]: Ignition finished successfully Oct 29 04:52:34.884156 systemd[1]: Finished ignition-kargs.service. Oct 29 04:52:34.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:34.886123 systemd[1]: Starting ignition-disks.service... Oct 29 04:52:34.897395 ignition[741]: Ignition 2.14.0 Oct 29 04:52:34.897417 ignition[741]: Stage: disks Oct 29 04:52:34.897584 ignition[741]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 29 04:52:34.897619 ignition[741]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 29 04:52:34.898881 ignition[741]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 29 04:52:34.900462 ignition[741]: disks: disks passed Oct 29 04:52:34.900529 ignition[741]: Ignition finished successfully Oct 29 04:52:34.903242 systemd[1]: Finished ignition-disks.service. Oct 29 04:52:34.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:34.904127 systemd[1]: Reached target initrd-root-device.target. 
Oct 29 04:52:34.905202 systemd[1]: Reached target local-fs-pre.target. Oct 29 04:52:34.906524 systemd[1]: Reached target local-fs.target. Oct 29 04:52:34.907859 systemd[1]: Reached target sysinit.target. Oct 29 04:52:34.909142 systemd[1]: Reached target basic.target. Oct 29 04:52:34.911603 systemd[1]: Starting systemd-fsck-root.service... Oct 29 04:52:34.931587 systemd-fsck[748]: ROOT: clean, 637/1628000 files, 124069/1617920 blocks Oct 29 04:52:34.941853 systemd[1]: Finished systemd-fsck-root.service. Oct 29 04:52:34.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:34.946009 systemd[1]: Mounting sysroot.mount... Oct 29 04:52:34.955870 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 29 04:52:34.956215 systemd[1]: Mounted sysroot.mount. Oct 29 04:52:34.957051 systemd[1]: Reached target initrd-root-fs.target. Oct 29 04:52:34.959765 systemd[1]: Mounting sysroot-usr.mount... Oct 29 04:52:34.961676 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 29 04:52:34.962747 systemd[1]: Starting flatcar-openstack-hostname.service... Oct 29 04:52:34.963575 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 29 04:52:34.963633 systemd[1]: Reached target ignition-diskful.target. Oct 29 04:52:34.970040 systemd[1]: Mounted sysroot-usr.mount. Oct 29 04:52:34.972001 systemd[1]: Starting initrd-setup-root.service... 
Oct 29 04:52:34.980413 initrd-setup-root[759]: cut: /sysroot/etc/passwd: No such file or directory Oct 29 04:52:34.996079 initrd-setup-root[767]: cut: /sysroot/etc/group: No such file or directory Oct 29 04:52:35.004984 initrd-setup-root[775]: cut: /sysroot/etc/shadow: No such file or directory Oct 29 04:52:35.017390 initrd-setup-root[784]: cut: /sysroot/etc/gshadow: No such file or directory Oct 29 04:52:35.080898 systemd[1]: Finished initrd-setup-root.service. Oct 29 04:52:35.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:35.082884 systemd[1]: Starting ignition-mount.service... Oct 29 04:52:35.084495 systemd[1]: Starting sysroot-boot.service... Oct 29 04:52:35.099065 bash[802]: umount: /sysroot/usr/share/oem: not mounted. Oct 29 04:52:35.114886 coreos-metadata[754]: Oct 29 04:52:35.114 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Oct 29 04:52:35.121296 ignition[804]: INFO : Ignition 2.14.0 Oct 29 04:52:35.121296 ignition[804]: INFO : Stage: mount Oct 29 04:52:35.122905 ignition[804]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 29 04:52:35.122905 ignition[804]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 29 04:52:35.125202 ignition[804]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 29 04:52:35.125202 ignition[804]: INFO : mount: mount passed Oct 29 04:52:35.125202 ignition[804]: INFO : Ignition finished successfully Oct 29 04:52:35.126031 systemd[1]: Finished ignition-mount.service. Oct 29 04:52:35.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 04:52:35.131764 coreos-metadata[754]: Oct 29 04:52:35.131 INFO Fetch successful Oct 29 04:52:35.132888 coreos-metadata[754]: Oct 29 04:52:35.132 INFO wrote hostname srv-xtjva.gb1.brightbox.com to /sysroot/etc/hostname Oct 29 04:52:35.137079 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Oct 29 04:52:35.137257 systemd[1]: Finished flatcar-openstack-hostname.service. Oct 29 04:52:35.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:35.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:35.143240 systemd[1]: Finished sysroot-boot.service. Oct 29 04:52:35.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:35.460146 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 29 04:52:35.471848 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (811) Oct 29 04:52:35.476635 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 29 04:52:35.476687 kernel: BTRFS info (device vda6): using free space tree Oct 29 04:52:35.476705 kernel: BTRFS info (device vda6): has skinny extents Oct 29 04:52:35.483972 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 29 04:52:35.485778 systemd[1]: Starting ignition-files.service... 
Oct 29 04:52:35.507125 ignition[831]: INFO : Ignition 2.14.0 Oct 29 04:52:35.507125 ignition[831]: INFO : Stage: files Oct 29 04:52:35.508844 ignition[831]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 29 04:52:35.508844 ignition[831]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 29 04:52:35.508844 ignition[831]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 29 04:52:35.512312 ignition[831]: DEBUG : files: compiled without relabeling support, skipping Oct 29 04:52:35.514149 ignition[831]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 29 04:52:35.515207 ignition[831]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 29 04:52:35.521281 ignition[831]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 29 04:52:35.522688 ignition[831]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 29 04:52:35.524962 unknown[831]: wrote ssh authorized keys file for user: core Oct 29 04:52:35.526681 ignition[831]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 29 04:52:35.527897 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Oct 29 04:52:35.529030 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Oct 29 04:52:35.529030 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Oct 29 04:52:35.531385 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Oct 29 04:52:35.705866 ignition[831]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 29 04:52:35.923571 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Oct 29 04:52:35.923571 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Oct 29 04:52:35.926107 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Oct 29 04:52:35.926107 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 29 04:52:35.926107 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 29 04:52:35.926107 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 29 04:52:35.926107 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 29 04:52:35.926107 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 29 04:52:35.926107 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 29 04:52:35.926107 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 29 04:52:35.926107 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 29 04:52:35.926107 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 29 04:52:35.926107 
ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 29 04:52:35.926107 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 29 04:52:35.926107 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Oct 29 04:52:36.175613 systemd-networkd[710]: eth0: Gained IPv6LL Oct 29 04:52:36.258317 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Oct 29 04:52:37.307060 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Oct 29 04:52:37.307060 ignition[831]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" Oct 29 04:52:37.307060 ignition[831]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" Oct 29 04:52:37.307060 ignition[831]: INFO : files: op(d): [started] processing unit "containerd.service" Oct 29 04:52:37.315994 ignition[831]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Oct 29 04:52:37.317407 ignition[831]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Oct 29 04:52:37.317407 ignition[831]: INFO : files: op(d): [finished] processing unit "containerd.service" Oct 29 04:52:37.317407 ignition[831]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Oct 29 04:52:37.317407 ignition[831]: INFO : files: op(f): op(10): [started] writing unit 
"prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 29 04:52:37.317407 ignition[831]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 29 04:52:37.317407 ignition[831]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Oct 29 04:52:37.317407 ignition[831]: INFO : files: op(11): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 29 04:52:37.317407 ignition[831]: INFO : files: op(11): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 29 04:52:37.317407 ignition[831]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Oct 29 04:52:37.340230 kernel: kauditd_printk_skb: 28 callbacks suppressed Oct 29 04:52:37.340289 kernel: audit: type=1130 audit(1761713557.327:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:37.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:37.327250 systemd[1]: Finished ignition-files.service. 
Oct 29 04:52:37.341409 ignition[831]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Oct 29 04:52:37.341409 ignition[831]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 29 04:52:37.341409 ignition[831]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 29 04:52:37.341409 ignition[831]: INFO : files: files passed Oct 29 04:52:37.341409 ignition[831]: INFO : Ignition finished successfully Oct 29 04:52:37.352612 kernel: audit: type=1130 audit(1761713557.346:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:37.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:37.330358 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 29 04:52:37.363341 kernel: audit: type=1130 audit(1761713557.352:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:37.363395 kernel: audit: type=1131 audit(1761713557.352:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:37.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 04:52:37.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:37.336966 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 29 04:52:37.365173 initrd-setup-root-after-ignition[856]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 29 04:52:37.338368 systemd[1]: Starting ignition-quench.service... Oct 29 04:52:37.345575 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 29 04:52:37.347365 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 29 04:52:37.347497 systemd[1]: Finished ignition-quench.service. Oct 29 04:52:37.353429 systemd[1]: Reached target ignition-complete.target. Oct 29 04:52:37.365429 systemd[1]: Starting initrd-parse-etc.service... Oct 29 04:52:37.386071 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 29 04:52:37.386261 systemd[1]: Finished initrd-parse-etc.service. Oct 29 04:52:37.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:37.390151 systemd[1]: Reached target initrd-fs.target. Oct 29 04:52:37.400026 kernel: audit: type=1130 audit(1761713557.387:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:37.400061 kernel: audit: type=1131 audit(1761713557.389:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 04:52:37.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:37.399353 systemd[1]: Reached target initrd.target. Oct 29 04:52:37.400648 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 29 04:52:37.401841 systemd[1]: Starting dracut-pre-pivot.service... Oct 29 04:52:37.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:37.418727 systemd[1]: Finished dracut-pre-pivot.service. Oct 29 04:52:37.439345 kernel: audit: type=1130 audit(1761713557.418:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:37.439332 systemd[1]: Starting initrd-cleanup.service... Oct 29 04:52:37.454678 systemd[1]: Stopped target nss-lookup.target. Oct 29 04:52:37.456273 systemd[1]: Stopped target remote-cryptsetup.target. Oct 29 04:52:37.457877 systemd[1]: Stopped target timers.target. Oct 29 04:52:37.459339 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 29 04:52:37.460383 systemd[1]: Stopped dracut-pre-pivot.service. Oct 29 04:52:37.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:37.462181 systemd[1]: Stopped target initrd.target. Oct 29 04:52:37.467974 kernel: audit: type=1131 audit(1761713557.461:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 04:52:37.468632 systemd[1]: Stopped target basic.target. Oct 29 04:52:37.470216 systemd[1]: Stopped target ignition-complete.target. Oct 29 04:52:37.471816 systemd[1]: Stopped target ignition-diskful.target. Oct 29 04:52:37.473417 systemd[1]: Stopped target initrd-root-device.target. Oct 29 04:52:37.474979 systemd[1]: Stopped target remote-fs.target. Oct 29 04:52:37.476481 systemd[1]: Stopped target remote-fs-pre.target. Oct 29 04:52:37.478004 systemd[1]: Stopped target sysinit.target. Oct 29 04:52:37.479490 systemd[1]: Stopped target local-fs.target. Oct 29 04:52:37.481001 systemd[1]: Stopped target local-fs-pre.target. Oct 29 04:52:37.482525 systemd[1]: Stopped target swap.target. Oct 29 04:52:37.483918 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 29 04:52:37.484921 systemd[1]: Stopped dracut-pre-mount.service. Oct 29 04:52:37.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:37.486626 systemd[1]: Stopped target cryptsetup.target. Oct 29 04:52:37.492124 kernel: audit: type=1131 audit(1761713557.485:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:37.492879 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 29 04:52:37.493877 systemd[1]: Stopped dracut-initqueue.service. Oct 29 04:52:37.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:37.495629 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Oct 29 04:52:37.501320 kernel: audit: type=1131 audit(1761713557.494:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:37.495877 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 29 04:52:37.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:37.502410 systemd[1]: ignition-files.service: Deactivated successfully. Oct 29 04:52:37.502621 systemd[1]: Stopped ignition-files.service. Oct 29 04:52:37.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:37.505182 systemd[1]: Stopping ignition-mount.service... Oct 29 04:52:37.510535 systemd[1]: Stopping iscsiuio.service... Oct 29 04:52:37.513491 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 29 04:52:37.514647 systemd[1]: Stopped kmod-static-nodes.service. Oct 29 04:52:37.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:37.517926 systemd[1]: Stopping sysroot-boot.service... Oct 29 04:52:37.519353 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 29 04:52:37.520565 systemd[1]: Stopped systemd-udev-trigger.service. Oct 29 04:52:37.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:37.522462 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Oct 29 04:52:37.523609 systemd[1]: Stopped dracut-pre-trigger.service. Oct 29 04:52:37.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:37.527428 ignition[869]: INFO : Ignition 2.14.0 Oct 29 04:52:37.527428 ignition[869]: INFO : Stage: umount Oct 29 04:52:37.529106 ignition[869]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 29 04:52:37.529106 ignition[869]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 29 04:52:37.529106 ignition[869]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 29 04:52:37.534125 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 29 04:52:37.535112 systemd[1]: Stopped iscsiuio.service. Oct 29 04:52:37.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:37.538294 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 29 04:52:37.539238 systemd[1]: Finished initrd-cleanup.service. Oct 29 04:52:37.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:37.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:37.542749 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Oct 29 04:52:37.544262 ignition[869]: INFO : umount: umount passed Oct 29 04:52:37.544262 ignition[869]: INFO : Ignition finished successfully Oct 29 04:52:37.546459 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 29 04:52:37.547544 systemd[1]: Stopped ignition-mount.service. Oct 29 04:52:37.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:37.549334 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 29 04:52:37.549407 systemd[1]: Stopped ignition-disks.service. Oct 29 04:52:37.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:37.550916 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 29 04:52:37.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:37.551002 systemd[1]: Stopped ignition-kargs.service. Oct 29 04:52:37.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:37.552200 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 29 04:52:37.552259 systemd[1]: Stopped ignition-fetch.service. Oct 29 04:52:37.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:37.553592 systemd[1]: Stopped target network.target. Oct 29 04:52:37.555471 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Oct 29 04:52:37.555543 systemd[1]: Stopped ignition-fetch-offline.service.
Oct 29 04:52:37.556930 systemd[1]: Stopped target paths.target.
Oct 29 04:52:37.558184 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 29 04:52:37.562877 systemd[1]: Stopped systemd-ask-password-console.path.
Oct 29 04:52:37.564538 systemd[1]: Stopped target slices.target.
Oct 29 04:52:37.565854 systemd[1]: Stopped target sockets.target.
Oct 29 04:52:37.567209 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 29 04:52:37.567731 systemd[1]: Closed iscsid.socket.
Oct 29 04:52:37.568404 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 29 04:52:37.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:37.568472 systemd[1]: Closed iscsiuio.socket.
Oct 29 04:52:37.569672 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 29 04:52:37.569727 systemd[1]: Stopped ignition-setup.service.
Oct 29 04:52:37.571157 systemd[1]: Stopping systemd-networkd.service...
Oct 29 04:52:37.573404 systemd[1]: Stopping systemd-resolved.service...
Oct 29 04:52:37.575878 systemd-networkd[710]: eth0: DHCPv6 lease lost
Oct 29 04:52:37.580421 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 29 04:52:37.581448 systemd[1]: Stopped systemd-resolved.service.
Oct 29 04:52:37.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:37.584126 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 29 04:52:37.584285 systemd[1]: Stopped systemd-networkd.service.
Oct 29 04:52:37.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:37.585000 audit: BPF prog-id=6 op=UNLOAD
Oct 29 04:52:37.586873 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 29 04:52:37.586942 systemd[1]: Closed systemd-networkd.socket.
Oct 29 04:52:37.587000 audit: BPF prog-id=9 op=UNLOAD
Oct 29 04:52:37.589962 systemd[1]: Stopping network-cleanup.service...
Oct 29 04:52:37.592216 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 29 04:52:37.592298 systemd[1]: Stopped parse-ip-for-networkd.service.
Oct 29 04:52:37.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:37.593872 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 29 04:52:37.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:37.593947 systemd[1]: Stopped systemd-sysctl.service.
Oct 29 04:52:37.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:37.595585 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 29 04:52:37.595669 systemd[1]: Stopped systemd-modules-load.service.
Oct 29 04:52:37.603473 systemd[1]: Stopping systemd-udevd.service...
Oct 29 04:52:37.606198 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct 29 04:52:37.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:37.609301 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 29 04:52:37.609551 systemd[1]: Stopped systemd-udevd.service.
Oct 29 04:52:37.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:37.610621 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 29 04:52:37.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:37.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:37.610689 systemd[1]: Closed systemd-udevd-control.socket.
Oct 29 04:52:37.611535 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 29 04:52:37.611594 systemd[1]: Closed systemd-udevd-kernel.socket.
Oct 29 04:52:37.612493 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 29 04:52:37.612578 systemd[1]: Stopped dracut-pre-udev.service.
Oct 29 04:52:37.613875 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 29 04:52:37.613940 systemd[1]: Stopped dracut-cmdline.service.
Oct 29 04:52:37.615203 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 29 04:52:37.615270 systemd[1]: Stopped dracut-cmdline-ask.service.
Oct 29 04:52:37.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:37.617553 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Oct 29 04:52:37.618333 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 29 04:52:37.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:37.618419 systemd[1]: Stopped systemd-vconsole-setup.service.
Oct 29 04:52:37.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:37.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:37.641553 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 29 04:52:37.641731 systemd[1]: Stopped network-cleanup.service.
Oct 29 04:52:37.642939 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 29 04:52:37.643071 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Oct 29 04:52:37.691920 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 29 04:52:37.692126 systemd[1]: Stopped sysroot-boot.service.
Oct 29 04:52:37.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:37.693653 systemd[1]: Reached target initrd-switch-root.target.
Oct 29 04:52:37.694910 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 29 04:52:37.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:37.694981 systemd[1]: Stopped initrd-setup-root.service.
Oct 29 04:52:37.697223 systemd[1]: Starting initrd-switch-root.service...
Oct 29 04:52:37.705000 audit: BPF prog-id=8 op=UNLOAD
Oct 29 04:52:37.705000 audit: BPF prog-id=7 op=UNLOAD
Oct 29 04:52:37.706222 systemd[1]: Switching root.
Oct 29 04:52:37.710000 audit: BPF prog-id=5 op=UNLOAD
Oct 29 04:52:37.710000 audit: BPF prog-id=4 op=UNLOAD
Oct 29 04:52:37.710000 audit: BPF prog-id=3 op=UNLOAD
Oct 29 04:52:37.726390 iscsid[720]: iscsid shutting down.
Oct 29 04:52:37.727180 systemd-journald[200]: Received SIGTERM from PID 1 (systemd).
Oct 29 04:52:37.727284 systemd-journald[200]: Journal stopped
Oct 29 04:52:41.807405 kernel: SELinux: Class mctp_socket not defined in policy.
Oct 29 04:52:41.808229 kernel: SELinux: Class anon_inode not defined in policy.
Oct 29 04:52:41.808264 kernel: SELinux: the above unknown classes and permissions will be allowed
Oct 29 04:52:41.808296 kernel: SELinux: policy capability network_peer_controls=1
Oct 29 04:52:41.808696 kernel: SELinux: policy capability open_perms=1
Oct 29 04:52:41.808722 kernel: SELinux: policy capability extended_socket_class=1
Oct 29 04:52:41.808742 kernel: SELinux: policy capability always_check_network=0
Oct 29 04:52:41.808766 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 29 04:52:41.809473 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 29 04:52:41.809498 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 29 04:52:41.809527 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 29 04:52:41.809557 systemd[1]: Successfully loaded SELinux policy in 61.816ms.
Oct 29 04:52:41.809614 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.448ms.
Oct 29 04:52:41.809650 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 29 04:52:41.809674 systemd[1]: Detected virtualization kvm.
Oct 29 04:52:41.809695 systemd[1]: Detected architecture x86-64.
Oct 29 04:52:41.809726 systemd[1]: Detected first boot.
Oct 29 04:52:41.809760 systemd[1]: Hostname set to .
Oct 29 04:52:41.809790 systemd[1]: Initializing machine ID from VM UUID.
Oct 29 04:52:41.809813 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Oct 29 04:52:41.809852 systemd[1]: Populated /etc with preset unit settings.
Oct 29 04:52:41.809877 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Oct 29 04:52:41.809909 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 29 04:52:41.809968 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 29 04:52:41.810007 systemd[1]: Queued start job for default target multi-user.target.
Oct 29 04:52:41.810031 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Oct 29 04:52:41.810052 systemd[1]: Created slice system-addon\x2dconfig.slice.
Oct 29 04:52:41.810079 systemd[1]: Created slice system-addon\x2drun.slice.
Oct 29 04:52:41.810101 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Oct 29 04:52:41.810122 systemd[1]: Created slice system-getty.slice.
Oct 29 04:52:41.810143 systemd[1]: Created slice system-modprobe.slice.
Oct 29 04:52:41.810177 systemd[1]: Created slice system-serial\x2dgetty.slice.
Oct 29 04:52:41.810200 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Oct 29 04:52:41.810221 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Oct 29 04:52:41.810241 systemd[1]: Created slice user.slice.
Oct 29 04:52:41.810268 systemd[1]: Started systemd-ask-password-console.path.
Oct 29 04:52:41.810295 systemd[1]: Started systemd-ask-password-wall.path.
Oct 29 04:52:41.810321 systemd[1]: Set up automount boot.automount.
Oct 29 04:52:41.810341 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Oct 29 04:52:41.810363 systemd[1]: Reached target integritysetup.target.
Oct 29 04:52:41.810396 systemd[1]: Reached target remote-cryptsetup.target.
Oct 29 04:52:41.810420 systemd[1]: Reached target remote-fs.target.
Oct 29 04:52:41.810441 systemd[1]: Reached target slices.target.
Oct 29 04:52:41.810462 systemd[1]: Reached target swap.target.
Oct 29 04:52:41.810497 systemd[1]: Reached target torcx.target.
Oct 29 04:52:41.810519 systemd[1]: Reached target veritysetup.target.
Oct 29 04:52:41.810546 systemd[1]: Listening on systemd-coredump.socket.
Oct 29 04:52:41.810577 systemd[1]: Listening on systemd-initctl.socket.
Oct 29 04:52:41.810600 systemd[1]: Listening on systemd-journald-audit.socket.
Oct 29 04:52:41.810621 systemd[1]: Listening on systemd-journald-dev-log.socket.
Oct 29 04:52:41.810641 systemd[1]: Listening on systemd-journald.socket.
Oct 29 04:52:41.810661 systemd[1]: Listening on systemd-networkd.socket.
Oct 29 04:52:41.810681 systemd[1]: Listening on systemd-udevd-control.socket.
Oct 29 04:52:41.810701 systemd[1]: Listening on systemd-udevd-kernel.socket.
Oct 29 04:52:41.810721 systemd[1]: Listening on systemd-userdbd.socket.
Oct 29 04:52:41.810742 systemd[1]: Mounting dev-hugepages.mount...
Oct 29 04:52:41.810762 systemd[1]: Mounting dev-mqueue.mount...
Oct 29 04:52:41.810794 systemd[1]: Mounting media.mount...
Oct 29 04:52:41.810828 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 29 04:52:41.810852 systemd[1]: Mounting sys-kernel-debug.mount...
Oct 29 04:52:41.810874 systemd[1]: Mounting sys-kernel-tracing.mount...
Oct 29 04:52:41.810896 systemd[1]: Mounting tmp.mount...
Oct 29 04:52:41.810918 systemd[1]: Starting flatcar-tmpfiles.service...
Oct 29 04:52:41.810946 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Oct 29 04:52:41.810978 systemd[1]: Starting kmod-static-nodes.service...
Oct 29 04:52:41.811014 systemd[1]: Starting modprobe@configfs.service...
Oct 29 04:52:41.811048 systemd[1]: Starting modprobe@dm_mod.service...
Oct 29 04:52:41.811071 systemd[1]: Starting modprobe@drm.service...
Oct 29 04:52:41.811091 systemd[1]: Starting modprobe@efi_pstore.service...
Oct 29 04:52:41.811112 systemd[1]: Starting modprobe@fuse.service...
Oct 29 04:52:41.811132 systemd[1]: Starting modprobe@loop.service...
Oct 29 04:52:41.811152 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 29 04:52:41.811173 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Oct 29 04:52:41.811194 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Oct 29 04:52:41.811225 systemd[1]: Starting systemd-journald.service...
Oct 29 04:52:41.811278 kernel: fuse: init (API version 7.34)
Oct 29 04:52:41.811302 systemd[1]: Starting systemd-modules-load.service...
Oct 29 04:52:41.811323 systemd[1]: Starting systemd-network-generator.service...
Oct 29 04:52:41.811348 systemd[1]: Starting systemd-remount-fs.service...
Oct 29 04:52:41.811368 kernel: loop: module loaded
Oct 29 04:52:41.811404 systemd[1]: Starting systemd-udev-trigger.service...
Oct 29 04:52:41.811428 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 29 04:52:41.811449 systemd[1]: Mounted dev-hugepages.mount.
Oct 29 04:52:41.811470 systemd[1]: Mounted dev-mqueue.mount.
Oct 29 04:52:41.811503 systemd[1]: Mounted media.mount.
Oct 29 04:52:41.811525 systemd[1]: Mounted sys-kernel-debug.mount.
Oct 29 04:52:41.811559 systemd[1]: Mounted sys-kernel-tracing.mount.
Oct 29 04:52:41.811590 systemd-journald[1013]: Journal started
Oct 29 04:52:41.812264 systemd-journald[1013]: Runtime Journal (/run/log/journal/5a3dcf57d1d345a3899abc73277112dd) is 4.7M, max 38.1M, 33.3M free.
Oct 29 04:52:41.603000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Oct 29 04:52:41.799000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Oct 29 04:52:41.799000 audit[1013]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fffaa8aa0a0 a2=4000 a3=7fffaa8aa13c items=0 ppid=1 pid=1013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 29 04:52:41.799000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Oct 29 04:52:41.817869 systemd[1]: Started systemd-journald.service.
Oct 29 04:52:41.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:41.816530 systemd[1]: Mounted tmp.mount.
Oct 29 04:52:41.819928 systemd[1]: Finished kmod-static-nodes.service.
Oct 29 04:52:41.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:41.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:41.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:41.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:41.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:41.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:41.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:41.821140 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 29 04:52:41.821410 systemd[1]: Finished modprobe@configfs.service.
Oct 29 04:52:41.822660 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 29 04:52:41.822919 systemd[1]: Finished modprobe@dm_mod.service.
Oct 29 04:52:41.825549 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 29 04:52:41.825798 systemd[1]: Finished modprobe@drm.service.
Oct 29 04:52:41.827034 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 29 04:52:41.827249 systemd[1]: Finished modprobe@efi_pstore.service.
Oct 29 04:52:41.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:41.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:41.828499 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 29 04:52:41.828715 systemd[1]: Finished modprobe@fuse.service.
Oct 29 04:52:41.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:41.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:41.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:41.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:41.829738 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 29 04:52:41.830084 systemd[1]: Finished modprobe@loop.service.
Oct 29 04:52:41.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:41.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:41.831175 systemd[1]: Finished systemd-modules-load.service.
Oct 29 04:52:41.832304 systemd[1]: Finished systemd-network-generator.service.
Oct 29 04:52:41.833378 systemd[1]: Finished systemd-remount-fs.service.
Oct 29 04:52:41.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:41.835448 systemd[1]: Reached target network-pre.target.
Oct 29 04:52:41.840431 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Oct 29 04:52:41.845029 systemd[1]: Mounting sys-kernel-config.mount...
Oct 29 04:52:41.845750 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 29 04:52:41.853715 systemd[1]: Starting systemd-hwdb-update.service...
Oct 29 04:52:41.856541 systemd[1]: Starting systemd-journal-flush.service...
Oct 29 04:52:41.859054 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 29 04:52:41.861001 systemd[1]: Starting systemd-random-seed.service...
Oct 29 04:52:41.865034 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Oct 29 04:52:41.868496 systemd[1]: Starting systemd-sysctl.service...
Oct 29 04:52:41.874468 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Oct 29 04:52:41.876807 systemd[1]: Mounted sys-kernel-config.mount.
Oct 29 04:52:41.882727 systemd-journald[1013]: Time spent on flushing to /var/log/journal/5a3dcf57d1d345a3899abc73277112dd is 92.587ms for 1220 entries.
Oct 29 04:52:41.882727 systemd-journald[1013]: System Journal (/var/log/journal/5a3dcf57d1d345a3899abc73277112dd) is 8.0M, max 584.8M, 576.8M free.
Oct 29 04:52:41.999254 systemd-journald[1013]: Received client request to flush runtime journal.
Oct 29 04:52:41.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:41.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:41.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:41.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:41.895689 systemd[1]: Finished systemd-random-seed.service.
Oct 29 04:52:41.896589 systemd[1]: Reached target first-boot-complete.target.
Oct 29 04:52:41.897768 systemd[1]: Finished flatcar-tmpfiles.service.
Oct 29 04:52:41.900726 systemd[1]: Starting systemd-sysusers.service...
Oct 29 04:52:41.926877 systemd[1]: Finished systemd-sysctl.service.
Oct 29 04:52:41.953943 systemd[1]: Finished systemd-sysusers.service.
Oct 29 04:52:41.956613 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Oct 29 04:52:42.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:42.002110 systemd[1]: Finished systemd-journal-flush.service.
Oct 29 04:52:42.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:42.008373 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Oct 29 04:52:42.060881 systemd[1]: Finished systemd-udev-trigger.service.
Oct 29 04:52:42.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:42.063682 systemd[1]: Starting systemd-udev-settle.service...
Oct 29 04:52:42.076533 udevadm[1067]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Oct 29 04:52:42.559403 systemd[1]: Finished systemd-hwdb-update.service.
Oct 29 04:52:42.566945 kernel: kauditd_printk_skb: 76 callbacks suppressed
Oct 29 04:52:42.567061 kernel: audit: type=1130 audit(1761713562.559:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:42.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:42.562118 systemd[1]: Starting systemd-udevd.service...
Oct 29 04:52:42.593553 systemd-udevd[1069]: Using default interface naming scheme 'v252'.
Oct 29 04:52:42.630726 systemd[1]: Started systemd-udevd.service.
Oct 29 04:52:42.638982 kernel: audit: type=1130 audit(1761713562.630:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:42.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:42.634094 systemd[1]: Starting systemd-networkd.service...
Oct 29 04:52:42.647273 systemd[1]: Starting systemd-userdbd.service...
Oct 29 04:52:42.702668 systemd[1]: Started systemd-userdbd.service.
Oct 29 04:52:42.708960 kernel: audit: type=1130 audit(1761713562.702:118): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:42.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:42.743136 systemd[1]: Found device dev-ttyS0.device.
Oct 29 04:52:42.843126 systemd-networkd[1070]: lo: Link UP
Oct 29 04:52:42.843152 systemd-networkd[1070]: lo: Gained carrier
Oct 29 04:52:42.843954 systemd-networkd[1070]: Enumeration completed
Oct 29 04:52:42.844117 systemd[1]: Started systemd-networkd.service.
Oct 29 04:52:42.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:42.845060 systemd-networkd[1070]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 29 04:52:42.850853 kernel: audit: type=1130 audit(1761713562.843:119): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:42.852261 systemd-networkd[1070]: eth0: Link UP
Oct 29 04:52:42.852274 systemd-networkd[1070]: eth0: Gained carrier
Oct 29 04:52:42.860229 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Oct 29 04:52:42.874047 systemd-networkd[1070]: eth0: DHCPv4 address 10.230.24.246/30, gateway 10.230.24.245 acquired from 10.230.24.245
Oct 29 04:52:42.879023 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Oct 29 04:52:42.889849 kernel: ACPI: button: Power Button [PWRF]
Oct 29 04:52:42.902866 kernel: mousedev: PS/2 mouse device common for all mice
Oct 29 04:52:42.958000 audit[1083]: AVC avc: denied { confidentiality } for pid=1083 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Oct 29 04:52:42.986274 kernel: audit: type=1400 audit(1761713562.958:120): avc: denied { confidentiality } for pid=1083 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Oct 29 04:52:42.958000 audit[1083]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55f727f45170 a1=338ec a2=7f9d0b84fbc5 a3=5 items=110 ppid=1069 pid=1083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 29 04:52:43.019842 kernel: audit: type=1300 audit(1761713562.958:120): arch=c000003e syscall=175 success=yes exit=0 a0=55f727f45170 a1=338ec a2=7f9d0b84fbc5 a3=5 items=110 ppid=1069 pid=1083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 29 04:52:42.958000 audit: CWD cwd="/"
Oct 29 04:52:43.022844 kernel: audit: type=1307 audit(1761713562.958:120): cwd="/"
Oct 29 04:52:42.958000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 29 04:52:42.958000 audit: PATH item=1 name=(null) inode=14676 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 29 04:52:42.958000 audit: PATH item=2 name=(null) inode=14676 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 29 04:52:42.958000 audit: PATH item=3 name=(null) inode=14677 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 29 04:52:42.958000 audit: PATH item=4 name=(null) inode=14676 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 29 04:52:42.958000 audit: PATH item=5 name=(null) inode=14678 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 29 04:52:42.958000 audit: PATH item=6 name=(null) inode=14676 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 29 04:52:43.026844 kernel: audit: type=1302 audit(1761713562.958:120): item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 29 04:52:43.026934 kernel: audit: type=1302 audit(1761713562.958:120): item=1 name=(null) inode=14676 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 29 04:52:43.026984 kernel: audit: type=1302 audit(1761713562.958:120): item=2 name=(null) inode=14676 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 29 04:52:42.958000 audit: PATH item=7 name=(null) inode=14679 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 29 04:52:42.958000 audit: PATH item=8 name=(null) inode=14679 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 29 04:52:42.958000 audit: PATH item=9 name=(null) inode=14680 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 29 04:52:42.958000 audit: PATH item=10 name=(null) inode=14679 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 29 04:52:42.958000 audit: PATH item=11 name=(null) inode=14681 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 29 04:52:42.958000 audit: PATH item=12 name=(null) inode=14679 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 29 04:52:42.958000 audit: PATH item=13 name=(null) inode=14682 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 29 04:52:42.958000 audit: PATH item=14 name=(null) inode=14679 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 29 04:52:42.958000 audit: PATH item=15 name=(null) inode=14683 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 29 04:52:42.958000 audit: PATH item=16 name=(null) inode=14679 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 29 04:52:42.958000 audit: PATH item=17 name=(null) inode=14684 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 29 04:52:42.958000 audit: PATH item=18 name=(null) inode=14676 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 29 04:52:42.958000 audit: PATH item=19 name=(null) inode=14685 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 29 04:52:42.958000 audit: PATH item=20 name=(null) inode=14685 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 29 04:52:42.958000 audit: PATH item=21 name=(null) inode=14686 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Oct 29 04:52:42.958000 audit: PATH item=22 name=(null)
inode=14685 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=23 name=(null) inode=14687 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=24 name=(null) inode=14685 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=25 name=(null) inode=14688 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=26 name=(null) inode=14685 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=27 name=(null) inode=14689 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=28 name=(null) inode=14685 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=29 name=(null) inode=14690 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=30 name=(null) inode=14676 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=31 name=(null) inode=14691 dev=00:0b mode=040750 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=32 name=(null) inode=14691 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=33 name=(null) inode=14692 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=34 name=(null) inode=14691 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=35 name=(null) inode=14693 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=36 name=(null) inode=14691 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=37 name=(null) inode=14694 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=38 name=(null) inode=14691 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=39 name=(null) inode=14695 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=40 name=(null) inode=14691 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=41 name=(null) inode=14696 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=42 name=(null) inode=14676 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=43 name=(null) inode=14697 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=44 name=(null) inode=14697 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=45 name=(null) inode=14698 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=46 name=(null) inode=14697 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=47 name=(null) inode=14699 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=48 name=(null) inode=14697 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=49 name=(null) inode=14700 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=50 name=(null) inode=14697 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=51 name=(null) inode=14701 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=52 name=(null) inode=14697 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=53 name=(null) inode=14702 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=55 name=(null) inode=14703 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=56 name=(null) inode=14703 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=57 name=(null) inode=14704 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=58 name=(null) inode=14703 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=59 name=(null) inode=14705 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=60 name=(null) inode=14703 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=61 name=(null) inode=14706 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=62 name=(null) inode=14706 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=63 name=(null) inode=14707 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=64 name=(null) inode=14706 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=65 name=(null) inode=14708 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=66 name=(null) inode=14706 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=67 name=(null) inode=14709 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 
04:52:42.958000 audit: PATH item=68 name=(null) inode=14706 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=69 name=(null) inode=14710 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=70 name=(null) inode=14706 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=71 name=(null) inode=14711 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=72 name=(null) inode=14703 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=73 name=(null) inode=14712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=74 name=(null) inode=14712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=75 name=(null) inode=14713 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=76 name=(null) inode=14712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=77 
name=(null) inode=14714 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=78 name=(null) inode=14712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=79 name=(null) inode=14715 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:43.034907 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 Oct 29 04:52:42.958000 audit: PATH item=80 name=(null) inode=14712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=81 name=(null) inode=14716 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=82 name=(null) inode=14712 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=83 name=(null) inode=14717 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=84 name=(null) inode=14703 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=85 name=(null) inode=14718 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=86 name=(null) inode=14718 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=87 name=(null) inode=14719 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=88 name=(null) inode=14718 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=89 name=(null) inode=14720 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=90 name=(null) inode=14718 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=91 name=(null) inode=14721 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=92 name=(null) inode=14718 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=93 name=(null) inode=14722 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:43.035905 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 29 04:52:43.049130 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Oct 29 04:52:43.049410 kernel: i2c i2c-0: Memory type 0x07 not 
supported yet, not instantiating SPD Oct 29 04:52:42.958000 audit: PATH item=94 name=(null) inode=14718 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=95 name=(null) inode=14723 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=96 name=(null) inode=14703 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=97 name=(null) inode=14724 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=98 name=(null) inode=14724 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=99 name=(null) inode=14725 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=100 name=(null) inode=14724 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=101 name=(null) inode=14726 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=102 name=(null) inode=14724 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 
29 04:52:42.958000 audit: PATH item=103 name=(null) inode=14727 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=104 name=(null) inode=14724 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=105 name=(null) inode=14728 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=106 name=(null) inode=14724 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=107 name=(null) inode=14729 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PATH item=109 name=(null) inode=14730 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:52:42.958000 audit: PROCTITLE proctitle="(udev-worker)" Oct 29 04:52:43.189475 systemd[1]: Finished systemd-udev-settle.service. Oct 29 04:52:43.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:43.192170 systemd[1]: Starting lvm2-activation-early.service... 
Oct 29 04:52:43.217507 lvm[1099]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 29 04:52:43.255286 systemd[1]: Finished lvm2-activation-early.service.
Oct 29 04:52:43.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:43.256226 systemd[1]: Reached target cryptsetup.target.
Oct 29 04:52:43.258865 systemd[1]: Starting lvm2-activation.service...
Oct 29 04:52:43.265624 lvm[1101]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 29 04:52:43.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:43.297477 systemd[1]: Finished lvm2-activation.service.
Oct 29 04:52:43.298372 systemd[1]: Reached target local-fs-pre.target.
Oct 29 04:52:43.299077 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 29 04:52:43.299111 systemd[1]: Reached target local-fs.target.
Oct 29 04:52:43.299730 systemd[1]: Reached target machines.target.
Oct 29 04:52:43.303494 systemd[1]: Starting ldconfig.service...
Oct 29 04:52:43.305167 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Oct 29 04:52:43.305386 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 29 04:52:43.307191 systemd[1]: Starting systemd-boot-update.service...
Oct 29 04:52:43.309474 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Oct 29 04:52:43.312382 systemd[1]: Starting systemd-machine-id-commit.service...
Oct 29 04:52:43.315200 systemd[1]: Starting systemd-sysext.service...
Oct 29 04:52:43.326893 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1104 (bootctl)
Oct 29 04:52:43.328955 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Oct 29 04:52:43.344789 systemd[1]: Unmounting usr-share-oem.mount...
Oct 29 04:52:43.347696 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Oct 29 04:52:43.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:43.355319 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Oct 29 04:52:43.355648 systemd[1]: Unmounted usr-share-oem.mount.
Oct 29 04:52:43.499851 kernel: loop0: detected capacity change from 0 to 224512
Oct 29 04:52:43.510598 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 29 04:52:43.511592 systemd[1]: Finished systemd-machine-id-commit.service.
Oct 29 04:52:43.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:43.545729 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 29 04:52:43.569387 kernel: loop1: detected capacity change from 0 to 224512
Oct 29 04:52:43.586411 (sd-sysext)[1120]: Using extensions 'kubernetes'.
Oct 29 04:52:43.589085 (sd-sysext)[1120]: Merged extensions into '/usr'.
Oct 29 04:52:43.598454 systemd-fsck[1116]: fsck.fat 4.2 (2021-01-31)
Oct 29 04:52:43.598454 systemd-fsck[1116]: /dev/vda1: 790 files, 120772/258078 clusters
Oct 29 04:52:43.610979 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Oct 29 04:52:43.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:43.613795 systemd[1]: Mounting boot.mount...
Oct 29 04:52:43.644548 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 29 04:52:43.647277 systemd[1]: Mounting usr-share-oem.mount...
Oct 29 04:52:43.652322 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Oct 29 04:52:43.655505 systemd[1]: Starting modprobe@dm_mod.service...
Oct 29 04:52:43.660318 systemd[1]: Starting modprobe@efi_pstore.service...
Oct 29 04:52:43.663671 systemd[1]: Starting modprobe@loop.service...
Oct 29 04:52:43.664492 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Oct 29 04:52:43.664714 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 29 04:52:43.664959 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 29 04:52:43.678655 systemd[1]: Mounted boot.mount.
Oct 29 04:52:43.679672 systemd[1]: Mounted usr-share-oem.mount.
Oct 29 04:52:43.680919 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 29 04:52:43.681165 systemd[1]: Finished modprobe@dm_mod.service.
Oct 29 04:52:43.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:43.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:43.685099 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 29 04:52:43.685328 systemd[1]: Finished modprobe@efi_pstore.service.
Oct 29 04:52:43.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:43.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:43.687461 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 29 04:52:43.688324 systemd[1]: Finished modprobe@loop.service.
Oct 29 04:52:43.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:43.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:43.691142 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 29 04:52:43.691306 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Oct 29 04:52:43.696307 systemd[1]: Finished systemd-sysext.service.
Oct 29 04:52:43.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 04:52:43.704402 systemd[1]: Starting ensure-sysext.service...
Oct 29 04:52:43.712155 systemd[1]: Starting systemd-tmpfiles-setup.service...
Oct 29 04:52:43.722915 systemd[1]: Reloading.
Oct 29 04:52:43.735857 systemd-tmpfiles[1139]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Oct 29 04:52:43.741162 systemd-tmpfiles[1139]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 29 04:52:43.749360 systemd-tmpfiles[1139]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 29 04:52:43.882397 /usr/lib/systemd/system-generators/torcx-generator[1161]: time="2025-10-29T04:52:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Oct 29 04:52:43.884902 /usr/lib/systemd/system-generators/torcx-generator[1161]: time="2025-10-29T04:52:43Z" level=info msg="torcx already run"
Oct 29 04:52:43.955384 ldconfig[1103]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 29 04:52:44.038287 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Oct 29 04:52:44.038319 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 29 04:52:44.069093 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 29 04:52:44.111252 systemd-networkd[1070]: eth0: Gained IPv6LL Oct 29 04:52:44.172978 systemd[1]: Finished ldconfig.service. Oct 29 04:52:44.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:44.174554 systemd[1]: Finished systemd-boot-update.service. Oct 29 04:52:44.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:44.177331 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 29 04:52:44.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:44.181594 systemd[1]: Starting audit-rules.service... Oct 29 04:52:44.184634 systemd[1]: Starting clean-ca-certificates.service... Oct 29 04:52:44.187787 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 29 04:52:44.191505 systemd[1]: Starting systemd-resolved.service... Oct 29 04:52:44.195580 systemd[1]: Starting systemd-timesyncd.service... Oct 29 04:52:44.205576 systemd[1]: Starting systemd-update-utmp.service... Oct 29 04:52:44.210205 systemd[1]: Finished clean-ca-certificates.service. Oct 29 04:52:44.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 04:52:44.224000 audit[1227]: SYSTEM_BOOT pid=1227 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 29 04:52:44.219427 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 29 04:52:44.223249 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 04:52:44.223617 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 29 04:52:44.225774 systemd[1]: Starting modprobe@dm_mod.service... Oct 29 04:52:44.229585 systemd[1]: Starting modprobe@efi_pstore.service... Oct 29 04:52:44.236320 systemd[1]: Starting modprobe@loop.service... Oct 29 04:52:44.237625 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 29 04:52:44.237976 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 29 04:52:44.238268 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 29 04:52:44.238447 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 04:52:44.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 04:52:44.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:44.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:44.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:44.242084 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 04:52:44.242343 systemd[1]: Finished modprobe@dm_mod.service. Oct 29 04:52:44.243764 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 04:52:44.244027 systemd[1]: Finished modprobe@loop.service. Oct 29 04:52:44.252336 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 04:52:44.252751 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 29 04:52:44.255918 systemd[1]: Starting modprobe@dm_mod.service... Oct 29 04:52:44.259132 systemd[1]: Starting modprobe@loop.service... Oct 29 04:52:44.259984 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 29 04:52:44.260308 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 29 04:52:44.260651 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Oct 29 04:52:44.260921 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 04:52:44.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:44.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:44.265430 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 04:52:44.265711 systemd[1]: Finished modprobe@efi_pstore.service. Oct 29 04:52:44.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:44.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:44.273255 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 04:52:44.273475 systemd[1]: Finished modprobe@dm_mod.service. Oct 29 04:52:44.280530 systemd[1]: Finished systemd-update-utmp.service. Oct 29 04:52:44.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:44.283054 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 04:52:44.283461 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Oct 29 04:52:44.286227 systemd[1]: Starting modprobe@dm_mod.service... Oct 29 04:52:44.293416 systemd[1]: Starting modprobe@drm.service... Oct 29 04:52:44.299328 systemd[1]: Starting modprobe@efi_pstore.service... Oct 29 04:52:44.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:44.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:44.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:44.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:44.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:44.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:44.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 04:52:44.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:44.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:44.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:44.300936 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 29 04:52:44.301164 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 29 04:52:44.303297 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 29 04:52:44.305021 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 29 04:52:44.305275 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 29 04:52:44.307453 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 04:52:44.307725 systemd[1]: Finished modprobe@loop.service. Oct 29 04:52:44.311587 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 04:52:44.311815 systemd[1]: Finished modprobe@dm_mod.service. Oct 29 04:52:44.316718 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 29 04:52:44.317056 systemd[1]: Finished modprobe@drm.service. 
Oct 29 04:52:44.318620 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 04:52:44.318896 systemd[1]: Finished modprobe@efi_pstore.service. Oct 29 04:52:44.320416 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 29 04:52:44.320589 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 29 04:52:44.323334 systemd[1]: Finished ensure-sysext.service. Oct 29 04:52:44.325033 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 29 04:52:44.331321 systemd[1]: Starting systemd-update-done.service... Oct 29 04:52:44.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:44.349202 systemd[1]: Finished systemd-update-done.service. Oct 29 04:52:44.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:52:44.354637 systemd[1]: Finished systemd-networkd-wait-online.service. 
Oct 29 04:52:44.394000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 29 04:52:44.394000 audit[1260]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffb1fe20b0 a2=420 a3=0 items=0 ppid=1215 pid=1260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:52:44.394000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 29 04:52:44.395465 augenrules[1260]: No rules Oct 29 04:52:44.396314 systemd[1]: Finished audit-rules.service. Oct 29 04:52:44.431898 systemd-resolved[1218]: Positive Trust Anchors: Oct 29 04:52:44.432970 systemd-resolved[1218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 29 04:52:44.433134 systemd-resolved[1218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 29 04:52:44.433973 systemd[1]: Started systemd-timesyncd.service. Oct 29 04:52:44.435123 systemd[1]: Reached target time-set.target. Oct 29 04:52:44.444129 systemd-resolved[1218]: Using system hostname 'srv-xtjva.gb1.brightbox.com'. Oct 29 04:52:44.446969 systemd[1]: Started systemd-resolved.service. Oct 29 04:52:44.447916 systemd[1]: Reached target network.target. Oct 29 04:52:44.448583 systemd[1]: Reached target network-online.target. Oct 29 04:52:44.449335 systemd[1]: Reached target nss-lookup.target. 
Oct 29 04:52:44.464741 systemd[1]: Reached target sysinit.target. Oct 29 04:52:44.465592 systemd[1]: Started motdgen.path. Oct 29 04:52:44.466290 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 29 04:52:44.467361 systemd[1]: Started logrotate.timer. Oct 29 04:52:44.468155 systemd[1]: Started mdadm.timer. Oct 29 04:52:44.468717 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 29 04:52:44.469440 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 29 04:52:44.469513 systemd[1]: Reached target paths.target. Oct 29 04:52:44.470150 systemd[1]: Reached target timers.target. Oct 29 04:52:44.471246 systemd[1]: Listening on dbus.socket. Oct 29 04:52:44.474029 systemd[1]: Starting docker.socket... Oct 29 04:52:44.476761 systemd[1]: Listening on sshd.socket. Oct 29 04:52:44.477504 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 29 04:52:44.477959 systemd[1]: Listening on docker.socket. Oct 29 04:52:44.478673 systemd[1]: Reached target sockets.target. Oct 29 04:52:44.479302 systemd[1]: Reached target basic.target. Oct 29 04:52:44.480174 systemd[1]: System is tainted: cgroupsv1 Oct 29 04:52:44.480257 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 29 04:52:44.480301 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 29 04:52:44.482050 systemd[1]: Starting containerd.service... Oct 29 04:52:44.485472 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Oct 29 04:52:44.488312 systemd[1]: Starting dbus.service... Oct 29 04:52:44.491789 systemd[1]: Starting enable-oem-cloudinit.service... Oct 29 04:52:44.499346 systemd[1]: Starting extend-filesystems.service... 
Oct 29 04:52:44.500254 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 29 04:52:44.503090 systemd[1]: Starting kubelet.service... Oct 29 04:52:44.507427 systemd[1]: Starting motdgen.service... Oct 29 04:52:44.512634 systemd[1]: Starting prepare-helm.service... Oct 29 04:52:44.516465 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 29 04:52:44.533441 jq[1273]: false Oct 29 04:52:44.524027 systemd[1]: Starting sshd-keygen.service... Oct 29 04:52:44.534529 systemd[1]: Starting systemd-logind.service... Oct 29 04:52:44.542130 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 29 04:52:44.542389 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 29 04:52:44.544767 systemd[1]: Starting update-engine.service... Oct 29 04:52:44.556731 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 29 04:52:44.563886 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 29 04:52:44.564318 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 29 04:52:44.573043 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 29 04:52:44.574011 systemd[1]: Finished ssh-key-proc-cmdline.service. Oct 29 04:52:44.605089 tar[1304]: linux-amd64/LICENSE Oct 29 04:52:44.605751 jq[1301]: true Oct 29 04:52:44.625465 dbus-daemon[1272]: [system] SELinux support is enabled Oct 29 04:52:44.626123 tar[1304]: linux-amd64/helm Oct 29 04:52:44.626657 systemd[1]: Started dbus.service. 
Oct 29 04:52:44.627783 extend-filesystems[1274]: Found loop1 Oct 29 04:52:44.630440 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 29 04:52:44.630478 systemd[1]: Reached target system-config.target. Oct 29 04:52:44.631243 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 29 04:52:44.631285 systemd[1]: Reached target user-config.target. Oct 29 04:52:44.632885 extend-filesystems[1274]: Found vda Oct 29 04:52:44.635362 dbus-daemon[1272]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1070 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Oct 29 04:52:44.636929 extend-filesystems[1274]: Found vda1 Oct 29 04:52:44.638564 extend-filesystems[1274]: Found vda2 Oct 29 04:52:44.638564 extend-filesystems[1274]: Found vda3 Oct 29 04:52:44.638564 extend-filesystems[1274]: Found usr Oct 29 04:52:44.638564 extend-filesystems[1274]: Found vda4 Oct 29 04:52:44.638564 extend-filesystems[1274]: Found vda6 Oct 29 04:52:44.659272 extend-filesystems[1274]: Found vda7 Oct 29 04:52:44.659272 extend-filesystems[1274]: Found vda9 Oct 29 04:52:44.659272 extend-filesystems[1274]: Checking size of /dev/vda9 Oct 29 04:52:44.641379 systemd[1]: motdgen.service: Deactivated successfully. Oct 29 04:52:44.684725 jq[1309]: true Oct 29 04:52:44.641714 systemd[1]: Finished motdgen.service. Oct 29 04:52:44.664271 systemd[1]: Starting systemd-hostnamed.service... 
Oct 29 04:52:44.725994 extend-filesystems[1274]: Resized partition /dev/vda9 Oct 29 04:52:44.740673 extend-filesystems[1332]: resize2fs 1.46.5 (30-Dec-2021) Oct 29 04:52:44.745447 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Oct 29 04:52:44.782199 update_engine[1294]: I1029 04:52:44.781574 1294 main.cc:92] Flatcar Update Engine starting Oct 29 04:52:44.793682 update_engine[1294]: I1029 04:52:44.787647 1294 update_check_scheduler.cc:74] Next update check in 7m15s Oct 29 04:52:44.787635 systemd[1]: Started update-engine.service. Oct 29 04:52:44.791189 systemd[1]: Started locksmithd.service. Oct 29 04:52:44.823445 bash[1337]: Updated "/home/core/.ssh/authorized_keys" Oct 29 04:52:44.824466 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 29 04:52:44.880505 env[1306]: time="2025-10-29T04:52:44.880307721Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 29 04:52:44.907630 systemd-logind[1290]: Watching system buttons on /dev/input/event2 (Power Button) Oct 29 04:52:44.910940 systemd-logind[1290]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 29 04:52:44.913299 systemd-logind[1290]: New seat seat0. Oct 29 04:52:44.921495 systemd[1]: Started systemd-logind.service. Oct 29 04:52:44.929536 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Oct 29 04:52:44.950289 extend-filesystems[1332]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 29 04:52:44.950289 extend-filesystems[1332]: old_desc_blocks = 1, new_desc_blocks = 8 Oct 29 04:52:44.950289 extend-filesystems[1332]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Oct 29 04:52:44.955258 extend-filesystems[1274]: Resized filesystem in /dev/vda9 Oct 29 04:52:44.952335 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 29 04:52:44.952716 systemd[1]: Finished extend-filesystems.service. 
Oct 29 04:52:44.972891 env[1306]: time="2025-10-29T04:52:44.972757183Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 29 04:52:44.979492 env[1306]: time="2025-10-29T04:52:44.979450405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 29 04:52:44.982737 env[1306]: time="2025-10-29T04:52:44.982686538Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 29 04:52:44.982914 env[1306]: time="2025-10-29T04:52:44.982883232Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 29 04:52:44.983377 env[1306]: time="2025-10-29T04:52:44.983342314Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 29 04:52:44.983522 env[1306]: time="2025-10-29T04:52:44.983491899Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 29 04:52:44.983669 env[1306]: time="2025-10-29T04:52:44.983638365Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 29 04:52:44.983805 env[1306]: time="2025-10-29T04:52:44.983778421Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 29 04:52:44.984217 env[1306]: time="2025-10-29T04:52:44.984187696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Oct 29 04:52:44.985057 env[1306]: time="2025-10-29T04:52:44.985026920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 29 04:52:44.986140 env[1306]: time="2025-10-29T04:52:44.986102830Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 29 04:52:44.986288 env[1306]: time="2025-10-29T04:52:44.986245961Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 29 04:52:44.986518 env[1306]: time="2025-10-29T04:52:44.986471540Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 29 04:52:44.987249 env[1306]: time="2025-10-29T04:52:44.987218730Z" level=info msg="metadata content store policy set" policy=shared Oct 29 04:52:44.998406 env[1306]: time="2025-10-29T04:52:44.996994641Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 29 04:52:44.998406 env[1306]: time="2025-10-29T04:52:44.997073303Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 29 04:52:44.998406 env[1306]: time="2025-10-29T04:52:44.997099062Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 29 04:52:44.998406 env[1306]: time="2025-10-29T04:52:44.997260415Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 29 04:52:44.998406 env[1306]: time="2025-10-29T04:52:44.997306636Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Oct 29 04:52:44.998406 env[1306]: time="2025-10-29T04:52:44.997341860Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 29 04:52:44.998406 env[1306]: time="2025-10-29T04:52:44.997362559Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 29 04:52:44.998406 env[1306]: time="2025-10-29T04:52:44.997402211Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 29 04:52:44.998406 env[1306]: time="2025-10-29T04:52:44.997422964Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 29 04:52:44.998406 env[1306]: time="2025-10-29T04:52:44.997469748Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 29 04:52:44.998406 env[1306]: time="2025-10-29T04:52:44.997496670Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 29 04:52:44.998406 env[1306]: time="2025-10-29T04:52:44.997555776Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 29 04:52:44.998406 env[1306]: time="2025-10-29T04:52:44.997780669Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 29 04:52:44.998406 env[1306]: time="2025-10-29T04:52:44.998020843Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 29 04:52:44.999705 env[1306]: time="2025-10-29T04:52:44.999540498Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 29 04:52:44.999705 env[1306]: time="2025-10-29T04:52:44.999634700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Oct 29 04:52:44.999705 env[1306]: time="2025-10-29T04:52:44.999662028Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 29 04:52:45.000105 env[1306]: time="2025-10-29T04:52:45.000065043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 29 04:52:45.000240 env[1306]: time="2025-10-29T04:52:45.000213532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 29 04:52:45.000377 env[1306]: time="2025-10-29T04:52:45.000349603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 29 04:52:45.000516 env[1306]: time="2025-10-29T04:52:45.000489415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 29 04:52:45.000688 env[1306]: time="2025-10-29T04:52:45.000660109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 29 04:52:45.000883 env[1306]: time="2025-10-29T04:52:45.000815508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 29 04:52:45.001025 env[1306]: time="2025-10-29T04:52:45.000996391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 29 04:52:45.001174 env[1306]: time="2025-10-29T04:52:45.001147119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 29 04:52:45.001331 env[1306]: time="2025-10-29T04:52:45.001303864Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 29 04:52:45.001761 env[1306]: time="2025-10-29T04:52:45.001732568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Oct 29 04:52:45.003937 env[1306]: time="2025-10-29T04:52:45.003903406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 29 04:52:45.004093 env[1306]: time="2025-10-29T04:52:45.004061924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 29 04:52:45.004293 env[1306]: time="2025-10-29T04:52:45.004244470Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 29 04:52:45.004431 env[1306]: time="2025-10-29T04:52:45.004398526Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 29 04:52:45.004608 env[1306]: time="2025-10-29T04:52:45.004580354Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 29 04:52:45.004794 env[1306]: time="2025-10-29T04:52:45.004744332Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 29 04:52:45.005033 env[1306]: time="2025-10-29T04:52:45.005004935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 29 04:52:45.005555 env[1306]: time="2025-10-29T04:52:45.005446816Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 29 04:52:45.007786 env[1306]: time="2025-10-29T04:52:45.005761324Z" level=info msg="Connect containerd service" Oct 29 04:52:45.007786 env[1306]: time="2025-10-29T04:52:45.005891232Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 29 04:52:45.008654 env[1306]: time="2025-10-29T04:52:45.008613798Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 29 04:52:45.012403 dbus-daemon[1272]: [system] Successfully activated service 'org.freedesktop.hostname1' Oct 29 04:52:45.012814 systemd[1]: Started systemd-hostnamed.service. Oct 29 04:52:45.013256 dbus-daemon[1272]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1319 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Oct 29 04:52:45.014552 env[1306]: time="2025-10-29T04:52:45.014518708Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 29 04:52:45.017484 env[1306]: time="2025-10-29T04:52:45.017436166Z" level=info msg="Start subscribing containerd event" Oct 29 04:52:45.017681 env[1306]: time="2025-10-29T04:52:45.017643080Z" level=info msg="Start recovering state" Oct 29 04:52:45.018041 systemd[1]: Starting polkit.service... 
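The `failed to load cni during init` error above is containerd's CRI plugin reporting that `/etc/cni/net.d` contains no network config yet, which is normal before a CNI plugin is installed. As a hedged illustration of the kind of file that satisfies that check, here is a minimal bridge conflist sketch; the network name, bridge device, and subnet are placeholders (not values from this host), and it is written to a temp directory rather than `/etc/cni/net.d`:

```python
import json
import os
import tempfile

# Illustrative bridge CNI conflist; "example-net", "cni0", and the
# 10.88.0.0/16 subnet are placeholders, not taken from this host's log.
conflist = {
    "cniVersion": "0.4.0",
    "name": "example-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.88.0.0/16",
            },
        }
    ],
}

# Written under a temp dir here; on a real node this would be something
# like /etc/cni/net.d/10-example.conflist, after which the CRI plugin's
# conf syncer (started below) picks it up.
conf_dir = tempfile.mkdtemp()
path = os.path.join(conf_dir, "10-example.conflist")
with open(path, "w") as f:
    json.dump(conflist, f, indent=2)
print(path)
```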
Oct 29 04:52:45.018332 env[1306]: time="2025-10-29T04:52:45.018303881Z" level=info msg="Start event monitor" Oct 29 04:52:45.018486 env[1306]: time="2025-10-29T04:52:45.018456307Z" level=info msg="Start snapshots syncer" Oct 29 04:52:45.018632 env[1306]: time="2025-10-29T04:52:45.018602151Z" level=info msg="Start cni network conf syncer for default" Oct 29 04:52:45.019102 env[1306]: time="2025-10-29T04:52:45.019073050Z" level=info msg="Start streaming server" Oct 29 04:52:45.020666 env[1306]: time="2025-10-29T04:52:45.020627496Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 29 04:52:45.021281 systemd[1]: Started containerd.service. Oct 29 04:52:45.022629 env[1306]: time="2025-10-29T04:52:45.022600338Z" level=info msg="containerd successfully booted in 0.143333s" Oct 29 04:52:45.047937 polkitd[1347]: Started polkitd version 121 Oct 29 04:52:45.072738 polkitd[1347]: Loading rules from directory /etc/polkit-1/rules.d Oct 29 04:52:45.073086 polkitd[1347]: Loading rules from directory /usr/share/polkit-1/rules.d Oct 29 04:52:45.075277 polkitd[1347]: Finished loading, compiling and executing 2 rules Oct 29 04:52:45.075785 dbus-daemon[1272]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Oct 29 04:52:45.076151 systemd[1]: Started polkit.service. Oct 29 04:52:45.081976 polkitd[1347]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Oct 29 04:52:45.098886 systemd-hostnamed[1319]: Hostname set to (static) Oct 29 04:52:46.662735 systemd-resolved[1218]: Clock change detected. Flushing caches. Oct 29 04:52:46.663108 systemd-timesyncd[1221]: Contacted time server 185.177.149.33:123 (0.flatcar.pool.ntp.org). Oct 29 04:52:46.663440 systemd-timesyncd[1221]: Initial clock synchronization to Wed 2025-10-29 04:52:46.660927 UTC. 
Oct 29 04:52:46.692508 locksmithd[1338]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 29 04:52:46.712358 systemd-networkd[1070]: eth0: Ignoring DHCPv6 address 2a02:1348:179:863d:24:19ff:fee6:18f6/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:863d:24:19ff:fee6:18f6/64 assigned by NDisc. Oct 29 04:52:46.712399 systemd-networkd[1070]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Oct 29 04:52:46.820527 tar[1304]: linux-amd64/README.md Oct 29 04:52:46.834981 systemd[1]: Finished prepare-helm.service. Oct 29 04:52:46.880665 sshd_keygen[1295]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 29 04:52:46.911186 systemd[1]: Finished sshd-keygen.service. Oct 29 04:52:46.915102 systemd[1]: Starting issuegen.service... Oct 29 04:52:46.925726 systemd[1]: issuegen.service: Deactivated successfully. Oct 29 04:52:46.926064 systemd[1]: Finished issuegen.service. Oct 29 04:52:46.929408 systemd[1]: Starting systemd-user-sessions.service... Oct 29 04:52:46.940823 systemd[1]: Finished systemd-user-sessions.service. Oct 29 04:52:46.943883 systemd[1]: Started getty@tty1.service. Oct 29 04:52:46.947835 systemd[1]: Started serial-getty@ttyS0.service. Oct 29 04:52:46.949042 systemd[1]: Reached target getty.target. Oct 29 04:52:47.395008 systemd[1]: Started kubelet.service. Oct 29 04:52:48.073547 kubelet[1384]: E1029 04:52:48.073470 1384 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 04:52:48.076317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 04:52:48.076658 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
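The kubelet exit above is the expected failure mode on a node that has not yet been joined to a cluster: `/var/lib/kubelet/config.yaml` is normally generated by `kubeadm init` or `kubeadm join`, so the service crash-loops until that happens. As a sketch of the file's shape only, here is a minimal `KubeletConfiguration` skeleton; the `apiVersion`/`kind` are the real kubelet config types, but the `cgroupDriver` value is an assumed example, and the file is written to a temp path rather than `/var/lib/kubelet`:

```python
import os
import tempfile

# Minimal KubeletConfiguration skeleton. apiVersion/kind match the real
# kubelet.config.k8s.io schema; cgroupDriver: systemd is an assumption,
# not a value recovered from this host.
config_yaml = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
"""

# Written to a temp file here; the kubelet itself reads the path shown in
# the error above, /var/lib/kubelet/config.yaml.
path = os.path.join(tempfile.mkdtemp(), "config.yaml")
with open(path, "w") as f:
    f.write(config_yaml)
print(path)
```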
Oct 29 04:52:52.703968 coreos-metadata[1271]: Oct 29 04:52:52.703 WARN failed to locate config-drive, using the metadata service API instead Oct 29 04:52:52.758038 coreos-metadata[1271]: Oct 29 04:52:52.757 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Oct 29 04:52:52.782899 coreos-metadata[1271]: Oct 29 04:52:52.782 INFO Fetch successful Oct 29 04:52:52.783249 coreos-metadata[1271]: Oct 29 04:52:52.783 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Oct 29 04:52:52.812513 coreos-metadata[1271]: Oct 29 04:52:52.812 INFO Fetch successful Oct 29 04:52:52.826746 unknown[1271]: wrote ssh authorized keys file for user: core Oct 29 04:52:52.837654 update-ssh-keys[1394]: Updated "/home/core/.ssh/authorized_keys" Oct 29 04:52:52.838850 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Oct 29 04:52:52.839335 systemd[1]: Reached target multi-user.target. Oct 29 04:52:52.842626 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 29 04:52:52.854100 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 29 04:52:52.854494 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 29 04:52:52.855067 systemd[1]: Startup finished in 7.377s (kernel) + 13.996s (userspace) = 21.373s. Oct 29 04:52:55.226350 systemd[1]: Created slice system-sshd.slice. Oct 29 04:52:55.228819 systemd[1]: Started sshd@0-10.230.24.246:22-147.75.109.163:44600.service. Oct 29 04:52:56.138692 sshd[1400]: Accepted publickey for core from 147.75.109.163 port 44600 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 04:52:56.142686 sshd[1400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 04:52:56.159502 systemd[1]: Created slice user-500.slice. Oct 29 04:52:56.161140 systemd[1]: Starting user-runtime-dir@500.service... Oct 29 04:52:56.166740 systemd-logind[1290]: New session 1 of user core. 
Oct 29 04:52:56.178261 systemd[1]: Finished user-runtime-dir@500.service. Oct 29 04:52:56.180652 systemd[1]: Starting user@500.service... Oct 29 04:52:56.189309 (systemd)[1405]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 29 04:52:56.295406 systemd[1405]: Queued start job for default target default.target. Oct 29 04:52:56.295778 systemd[1405]: Reached target paths.target. Oct 29 04:52:56.295806 systemd[1405]: Reached target sockets.target. Oct 29 04:52:56.295827 systemd[1405]: Reached target timers.target. Oct 29 04:52:56.295847 systemd[1405]: Reached target basic.target. Oct 29 04:52:56.296028 systemd[1]: Started user@500.service. Oct 29 04:52:56.297556 systemd[1]: Started session-1.scope. Oct 29 04:52:56.298034 systemd[1405]: Reached target default.target. Oct 29 04:52:56.298455 systemd[1405]: Startup finished in 99ms. Oct 29 04:52:56.928302 systemd[1]: Started sshd@1-10.230.24.246:22-147.75.109.163:44602.service. Oct 29 04:52:57.832234 sshd[1414]: Accepted publickey for core from 147.75.109.163 port 44602 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 04:52:57.835179 sshd[1414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 04:52:57.842986 systemd-logind[1290]: New session 2 of user core. Oct 29 04:52:57.844126 systemd[1]: Started session-2.scope. Oct 29 04:52:58.160594 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 29 04:52:58.160896 systemd[1]: Stopped kubelet.service. Oct 29 04:52:58.163606 systemd[1]: Starting kubelet.service... Oct 29 04:52:58.345265 systemd[1]: Started kubelet.service. 
Oct 29 04:52:58.447546 kubelet[1427]: E1029 04:52:58.447354 1427 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 04:52:58.451304 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 04:52:58.451619 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 04:52:58.461668 sshd[1414]: pam_unix(sshd:session): session closed for user core Oct 29 04:52:58.465049 systemd[1]: sshd@1-10.230.24.246:22-147.75.109.163:44602.service: Deactivated successfully. Oct 29 04:52:58.466686 systemd[1]: session-2.scope: Deactivated successfully. Oct 29 04:52:58.466709 systemd-logind[1290]: Session 2 logged out. Waiting for processes to exit. Oct 29 04:52:58.468423 systemd-logind[1290]: Removed session 2. Oct 29 04:52:58.610218 systemd[1]: Started sshd@2-10.230.24.246:22-147.75.109.163:44614.service. Oct 29 04:52:59.530899 sshd[1436]: Accepted publickey for core from 147.75.109.163 port 44614 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 04:52:59.533268 sshd[1436]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 04:52:59.542267 systemd[1]: Started session-3.scope. Oct 29 04:52:59.542762 systemd-logind[1290]: New session 3 of user core. Oct 29 04:53:00.154113 sshd[1436]: pam_unix(sshd:session): session closed for user core Oct 29 04:53:00.157654 systemd[1]: sshd@2-10.230.24.246:22-147.75.109.163:44614.service: Deactivated successfully. Oct 29 04:53:00.159559 systemd[1]: session-3.scope: Deactivated successfully. Oct 29 04:53:00.160097 systemd-logind[1290]: Session 3 logged out. Waiting for processes to exit. Oct 29 04:53:00.161422 systemd-logind[1290]: Removed session 3. 
Oct 29 04:53:00.299427 systemd[1]: Started sshd@3-10.230.24.246:22-147.75.109.163:53462.service. Oct 29 04:53:01.200668 sshd[1443]: Accepted publickey for core from 147.75.109.163 port 53462 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 04:53:01.202849 sshd[1443]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 04:53:01.210281 systemd-logind[1290]: New session 4 of user core. Oct 29 04:53:01.211659 systemd[1]: Started session-4.scope. Oct 29 04:53:01.829082 sshd[1443]: pam_unix(sshd:session): session closed for user core Oct 29 04:53:01.834078 systemd[1]: sshd@3-10.230.24.246:22-147.75.109.163:53462.service: Deactivated successfully. Oct 29 04:53:01.835711 systemd[1]: session-4.scope: Deactivated successfully. Oct 29 04:53:01.836589 systemd-logind[1290]: Session 4 logged out. Waiting for processes to exit. Oct 29 04:53:01.837859 systemd-logind[1290]: Removed session 4. Oct 29 04:53:01.974946 systemd[1]: Started sshd@4-10.230.24.246:22-147.75.109.163:53476.service. Oct 29 04:53:02.880386 sshd[1450]: Accepted publickey for core from 147.75.109.163 port 53476 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 04:53:02.882762 sshd[1450]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 04:53:02.890041 systemd-logind[1290]: New session 5 of user core. Oct 29 04:53:02.891510 systemd[1]: Started session-5.scope. Oct 29 04:53:03.419393 sudo[1454]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 29 04:53:03.420460 sudo[1454]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 29 04:53:03.436856 dbus-daemon[1272]: \xd0M\'EV: received setenforce notice (enforcing=639699488) Oct 29 04:53:03.439745 sudo[1454]: pam_unix(sudo:session): session closed for user root Oct 29 04:53:03.590019 sshd[1450]: pam_unix(sshd:session): session closed for user core Oct 29 04:53:03.595482 systemd-logind[1290]: Session 5 logged out. 
Waiting for processes to exit. Oct 29 04:53:03.596821 systemd[1]: sshd@4-10.230.24.246:22-147.75.109.163:53476.service: Deactivated successfully. Oct 29 04:53:03.598016 systemd[1]: session-5.scope: Deactivated successfully. Oct 29 04:53:03.599591 systemd-logind[1290]: Removed session 5. Oct 29 04:53:03.738980 systemd[1]: Started sshd@5-10.230.24.246:22-147.75.109.163:53484.service. Oct 29 04:53:04.656853 sshd[1458]: Accepted publickey for core from 147.75.109.163 port 53484 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 04:53:04.659246 sshd[1458]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 04:53:04.667782 systemd[1]: Started session-6.scope. Oct 29 04:53:04.668527 systemd-logind[1290]: New session 6 of user core. Oct 29 04:53:05.142551 sudo[1463]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 29 04:53:05.143711 sudo[1463]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 29 04:53:05.149047 sudo[1463]: pam_unix(sudo:session): session closed for user root Oct 29 04:53:05.156770 sudo[1462]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 29 04:53:05.157169 sudo[1462]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 29 04:53:05.172594 systemd[1]: Stopping audit-rules.service... 
Oct 29 04:53:05.173000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 29 04:53:05.182138 kernel: kauditd_printk_skb: 150 callbacks suppressed Oct 29 04:53:05.182795 kernel: audit: type=1305 audit(1761713585.173:161): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 29 04:53:05.182870 kernel: audit: type=1300 audit(1761713585.173:161): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcad30fcb0 a2=420 a3=0 items=0 ppid=1 pid=1466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:05.173000 audit[1466]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcad30fcb0 a2=420 a3=0 items=0 ppid=1 pid=1466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:05.185229 auditctl[1466]: No rules Oct 29 04:53:05.189745 systemd[1]: audit-rules.service: Deactivated successfully. Oct 29 04:53:05.190150 systemd[1]: Stopped audit-rules.service. Oct 29 04:53:05.194873 systemd[1]: Starting audit-rules.service... Oct 29 04:53:05.173000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 29 04:53:05.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 04:53:05.203985 kernel: audit: type=1327 audit(1761713585.173:161): proctitle=2F7362696E2F617564697463746C002D44 Oct 29 04:53:05.204077 kernel: audit: type=1131 audit(1761713585.188:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:53:05.228421 augenrules[1484]: No rules Oct 29 04:53:05.229797 systemd[1]: Finished audit-rules.service. Oct 29 04:53:05.240632 kernel: audit: type=1130 audit(1761713585.228:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:53:05.240709 kernel: audit: type=1106 audit(1761713585.230:164): pid=1462 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 29 04:53:05.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:53:05.230000 audit[1462]: USER_END pid=1462 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 29 04:53:05.231738 sudo[1462]: pam_unix(sudo:session): session closed for user root Oct 29 04:53:05.230000 audit[1462]: CRED_DISP pid=1462 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 29 04:53:05.248486 kernel: audit: type=1104 audit(1761713585.230:165): pid=1462 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 29 04:53:05.383849 sshd[1458]: pam_unix(sshd:session): session closed for user core Oct 29 04:53:05.384000 audit[1458]: USER_END pid=1458 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:53:05.389182 systemd[1]: sshd@5-10.230.24.246:22-147.75.109.163:53484.service: Deactivated successfully. Oct 29 04:53:05.390490 systemd[1]: session-6.scope: Deactivated successfully. Oct 29 04:53:05.395406 kernel: audit: type=1106 audit(1761713585.384:166): pid=1458 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:53:05.392667 systemd-logind[1290]: Session 6 logged out. Waiting for processes to exit. Oct 29 04:53:05.394205 systemd-logind[1290]: Removed session 6. 
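The audit `PROCTITLE` records above (e.g. `proctitle=2F7362696E2F617564697463746C002D44`) carry the process command line hex-encoded, with NUL bytes separating argv entries; this is the standard auditd encoding, and a small decoder makes the records readable:

```python
def decode_proctitle(hexstr: str) -> str:
    """Decode an audit PROCTITLE payload: a hex-encoded byte string in
    which argv entries are separated by NUL bytes."""
    raw = bytes.fromhex(hexstr)
    return " ".join(
        part.decode("utf-8", "replace") for part in raw.split(b"\x00") if part
    )

# The payload logged when audit-rules was stopped above:
print(decode_proctitle("2F7362696E2F617564697463746C002D44"))  # → /sbin/auditctl -D

# The sshd record from the session setup:
print(decode_proctitle("737368643A20636F7265205B707269765D"))  # → sshd: core [priv]
```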
Oct 29 04:53:05.385000 audit[1458]: CRED_DISP pid=1458 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:53:05.402401 kernel: audit: type=1104 audit(1761713585.385:167): pid=1458 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:53:05.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.230.24.246:22-147.75.109.163:53484 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:53:05.409498 kernel: audit: type=1131 audit(1761713585.385:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.230.24.246:22-147.75.109.163:53484 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:53:05.531008 systemd[1]: Started sshd@6-10.230.24.246:22-147.75.109.163:53498.service. Oct 29 04:53:05.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.230.24.246:22-147.75.109.163:53498 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 04:53:06.426000 audit[1491]: USER_ACCT pid=1491 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:53:06.428677 sshd[1491]: Accepted publickey for core from 147.75.109.163 port 53498 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 04:53:06.428000 audit[1491]: CRED_ACQ pid=1491 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:53:06.428000 audit[1491]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd66994450 a2=3 a3=0 items=0 ppid=1 pid=1491 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:06.428000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 29 04:53:06.431290 sshd[1491]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 04:53:06.440232 systemd-logind[1290]: New session 7 of user core. Oct 29 04:53:06.440482 systemd[1]: Started session-7.scope. 
Oct 29 04:53:06.449000 audit[1491]: USER_START pid=1491 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:53:06.452000 audit[1494]: CRED_ACQ pid=1494 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:53:06.906000 audit[1495]: USER_ACCT pid=1495 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 29 04:53:06.908114 sudo[1495]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 29 04:53:06.907000 audit[1495]: CRED_REFR pid=1495 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 29 04:53:06.909235 sudo[1495]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 29 04:53:06.911000 audit[1495]: USER_START pid=1495 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 29 04:53:06.964785 systemd[1]: Starting docker.service... 
Oct 29 04:53:07.042413 env[1505]: time="2025-10-29T04:53:07.042284396Z" level=info msg="Starting up" Oct 29 04:53:07.046193 env[1505]: time="2025-10-29T04:53:07.046160413Z" level=info msg="parsed scheme: \"unix\"" module=grpc Oct 29 04:53:07.046342 env[1505]: time="2025-10-29T04:53:07.046311838Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Oct 29 04:53:07.046524 env[1505]: time="2025-10-29T04:53:07.046489987Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Oct 29 04:53:07.046670 env[1505]: time="2025-10-29T04:53:07.046640833Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Oct 29 04:53:07.052182 env[1505]: time="2025-10-29T04:53:07.052148256Z" level=info msg="parsed scheme: \"unix\"" module=grpc Oct 29 04:53:07.052324 env[1505]: time="2025-10-29T04:53:07.052295350Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Oct 29 04:53:07.052502 env[1505]: time="2025-10-29T04:53:07.052457961Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Oct 29 04:53:07.052634 env[1505]: time="2025-10-29T04:53:07.052606195Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Oct 29 04:53:07.108899 env[1505]: time="2025-10-29T04:53:07.108838135Z" level=warning msg="Your kernel does not support cgroup blkio weight" Oct 29 04:53:07.109222 env[1505]: time="2025-10-29T04:53:07.109193062Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Oct 29 04:53:07.109729 env[1505]: time="2025-10-29T04:53:07.109689681Z" level=info msg="Loading containers: start." 
Oct 29 04:53:07.207000 audit[1537]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1537 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:07.207000 audit[1537]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffdc06d2a50 a2=0 a3=7ffdc06d2a3c items=0 ppid=1505 pid=1537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:07.207000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Oct 29 04:53:07.211000 audit[1539]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1539 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:07.211000 audit[1539]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdfd1e9020 a2=0 a3=7ffdfd1e900c items=0 ppid=1505 pid=1539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:07.211000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Oct 29 04:53:07.214000 audit[1541]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1541 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:07.214000 audit[1541]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd5f54ef10 a2=0 a3=7ffd5f54eefc items=0 ppid=1505 pid=1541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:07.214000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Oct 29 04:53:07.217000 audit[1543]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1543 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:07.217000 audit[1543]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd56b9e460 a2=0 a3=7ffd56b9e44c items=0 ppid=1505 pid=1543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:07.217000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Oct 29 04:53:07.221000 audit[1545]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1545 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:07.221000 audit[1545]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff08a8c0e0 a2=0 a3=7fff08a8c0cc items=0 ppid=1505 pid=1545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:07.221000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Oct 29 04:53:07.243000 audit[1550]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1550 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:07.243000 audit[1550]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe6301c210 a2=0 a3=7ffe6301c1fc items=0 ppid=1505 pid=1550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:07.243000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Oct 29 04:53:07.251000 audit[1552]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1552 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:07.251000 audit[1552]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe10567280 a2=0 a3=7ffe1056726c items=0 ppid=1505 pid=1552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:07.251000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Oct 29 04:53:07.254000 audit[1554]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1554 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:07.254000 audit[1554]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe845dc920 a2=0 a3=7ffe845dc90c items=0 ppid=1505 pid=1554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:07.254000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Oct 29 04:53:07.258000 audit[1556]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1556 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:07.258000 audit[1556]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffcb8dcfd60 a2=0 a3=7ffcb8dcfd4c items=0 ppid=1505 pid=1556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:07.258000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Oct 29 04:53:07.282000 audit[1560]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1560 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:07.282000 audit[1560]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffea2a31d60 a2=0 a3=7ffea2a31d4c items=0 ppid=1505 pid=1560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:07.282000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Oct 29 04:53:07.288000 audit[1561]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1561 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:07.288000 audit[1561]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffeede92a00 a2=0 a3=7ffeede929ec items=0 ppid=1505 pid=1561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:07.288000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Oct 29 04:53:07.306826 kernel: Initializing XFRM netlink socket Oct 29 04:53:07.360554 env[1505]: time="2025-10-29T04:53:07.360488429Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Oct 29 04:53:07.407000 audit[1570]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1570 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:07.407000 audit[1570]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffd39fec3d0 a2=0 a3=7ffd39fec3bc items=0 ppid=1505 pid=1570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:07.407000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Oct 29 04:53:07.422000 audit[1573]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1573 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:07.422000 audit[1573]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffcbda36e40 a2=0 a3=7ffcbda36e2c items=0 ppid=1505 pid=1573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:07.422000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Oct 29 04:53:07.428000 audit[1576]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1576 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:07.428000 audit[1576]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe75b942e0 a2=0 a3=7ffe75b942cc items=0 ppid=1505 pid=1576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:07.428000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Oct 29 04:53:07.431000 audit[1578]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1578 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:07.431000 audit[1578]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc3f603c40 a2=0 a3=7ffc3f603c2c items=0 ppid=1505 pid=1578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:07.431000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Oct 29 04:53:07.435000 audit[1580]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1580 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:07.435000 audit[1580]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffce95672b0 a2=0 a3=7ffce956729c items=0 ppid=1505 pid=1580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:07.435000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Oct 29 04:53:07.438000 audit[1582]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1582 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:07.438000 audit[1582]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffc08ca1b40 a2=0 a3=7ffc08ca1b2c items=0 ppid=1505 
pid=1582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:07.438000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Oct 29 04:53:07.442000 audit[1584]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1584 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:07.442000 audit[1584]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffe6b2670b0 a2=0 a3=7ffe6b26709c items=0 ppid=1505 pid=1584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:07.442000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Oct 29 04:53:07.455000 audit[1587]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1587 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:07.455000 audit[1587]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffd4e0e5b80 a2=0 a3=7ffd4e0e5b6c items=0 ppid=1505 pid=1587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:07.455000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Oct 29 04:53:07.459000 audit[1589]: NETFILTER_CFG table=filter:21 family=2 entries=1 
op=nft_register_rule pid=1589 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:07.459000 audit[1589]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffdfbbf4c80 a2=0 a3=7ffdfbbf4c6c items=0 ppid=1505 pid=1589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:07.459000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Oct 29 04:53:07.463000 audit[1591]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1591 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:07.463000 audit[1591]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffe97ca3b80 a2=0 a3=7ffe97ca3b6c items=0 ppid=1505 pid=1591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:07.463000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Oct 29 04:53:07.466000 audit[1593]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1593 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:07.466000 audit[1593]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd86c28320 a2=0 a3=7ffd86c2830c items=0 ppid=1505 pid=1593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:07.466000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Oct 29 04:53:07.468878 systemd-networkd[1070]: docker0: Link UP Oct 29 04:53:07.478000 audit[1597]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1597 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:07.478000 audit[1597]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc01809520 a2=0 a3=7ffc0180950c items=0 ppid=1505 pid=1597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:07.478000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Oct 29 04:53:07.483000 audit[1598]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1598 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:07.483000 audit[1598]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffcfae986b0 a2=0 a3=7ffcfae9869c items=0 ppid=1505 pid=1598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:07.483000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Oct 29 04:53:07.485754 env[1505]: time="2025-10-29T04:53:07.485626672Z" level=info msg="Loading containers: done." 
Oct 29 04:53:07.510618 env[1505]: time="2025-10-29T04:53:07.510565869Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 29 04:53:07.510930 env[1505]: time="2025-10-29T04:53:07.510896429Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Oct 29 04:53:07.511111 env[1505]: time="2025-10-29T04:53:07.511081825Z" level=info msg="Daemon has completed initialization" Oct 29 04:53:07.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:53:07.528159 systemd[1]: Started docker.service. Oct 29 04:53:07.539445 env[1505]: time="2025-10-29T04:53:07.539334953Z" level=info msg="API listen on /run/docker.sock" Oct 29 04:53:08.660818 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 29 04:53:08.661177 systemd[1]: Stopped kubelet.service. Oct 29 04:53:08.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:53:08.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:53:08.663835 systemd[1]: Starting kubelet.service... Oct 29 04:53:08.867894 systemd[1]: Started kubelet.service. Oct 29 04:53:08.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 04:53:08.930165 env[1306]: time="2025-10-29T04:53:08.929540265Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Oct 29 04:53:08.966444 kubelet[1637]: E1029 04:53:08.966354 1637 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 04:53:08.968871 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 04:53:08.969204 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 04:53:08.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 29 04:53:09.862816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount585263559.mount: Deactivated successfully. 
Oct 29 04:53:12.242803 env[1306]: time="2025-10-29T04:53:12.242710976Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:12.246137 env[1306]: time="2025-10-29T04:53:12.246074043Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:12.248813 env[1306]: time="2025-10-29T04:53:12.248773289Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:12.251272 env[1306]: time="2025-10-29T04:53:12.251236217Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:12.252595 env[1306]: time="2025-10-29T04:53:12.252525996Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Oct 29 04:53:12.254060 env[1306]: time="2025-10-29T04:53:12.254011448Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Oct 29 04:53:14.738366 env[1306]: time="2025-10-29T04:53:14.738280424Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:14.740584 env[1306]: time="2025-10-29T04:53:14.740542062Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Oct 29 04:53:14.743073 env[1306]: time="2025-10-29T04:53:14.743036124Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:14.745567 env[1306]: time="2025-10-29T04:53:14.745527045Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:14.747013 env[1306]: time="2025-10-29T04:53:14.746911397Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Oct 29 04:53:14.747965 env[1306]: time="2025-10-29T04:53:14.747928681Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Oct 29 04:53:16.766449 kernel: kauditd_printk_skb: 88 callbacks suppressed Oct 29 04:53:16.766755 kernel: audit: type=1131 audit(1761713596.755:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:53:16.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:53:16.756012 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Oct 29 04:53:16.982252 env[1306]: time="2025-10-29T04:53:16.982183925Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:16.985451 env[1306]: time="2025-10-29T04:53:16.985415614Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:16.988120 env[1306]: time="2025-10-29T04:53:16.988076596Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:16.990625 env[1306]: time="2025-10-29T04:53:16.990589205Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:16.991854 env[1306]: time="2025-10-29T04:53:16.991811022Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Oct 29 04:53:16.992936 env[1306]: time="2025-10-29T04:53:16.992885212Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Oct 29 04:53:19.160816 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 29 04:53:19.161146 systemd[1]: Stopped kubelet.service. Oct 29 04:53:19.170924 kernel: audit: type=1130 audit(1761713599.159:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 04:53:19.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:53:19.168396 systemd[1]: Starting kubelet.service... Oct 29 04:53:19.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:53:19.177416 kernel: audit: type=1131 audit(1761713599.159:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:53:19.319800 systemd[1]: Started kubelet.service. Oct 29 04:53:19.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:53:19.326398 kernel: audit: type=1130 audit(1761713599.318:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:53:19.453536 kubelet[1659]: E1029 04:53:19.452736 1659 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 04:53:19.457309 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 04:53:19.457691 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 29 04:53:19.467164 kernel: audit: type=1131 audit(1761713599.456:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 29 04:53:19.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 29 04:53:21.053212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount320560895.mount: Deactivated successfully. Oct 29 04:53:22.136720 env[1306]: time="2025-10-29T04:53:22.136602779Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:22.138646 env[1306]: time="2025-10-29T04:53:22.138606660Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:22.140570 env[1306]: time="2025-10-29T04:53:22.140533149Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:22.142170 env[1306]: time="2025-10-29T04:53:22.142131857Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:22.143133 env[1306]: time="2025-10-29T04:53:22.143078019Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Oct 29 04:53:22.144684 env[1306]: time="2025-10-29T04:53:22.144649605Z" 
level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Oct 29 04:53:22.928923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount637465504.mount: Deactivated successfully. Oct 29 04:53:24.466722 env[1306]: time="2025-10-29T04:53:24.466642737Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:24.468803 env[1306]: time="2025-10-29T04:53:24.468761725Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:24.471462 env[1306]: time="2025-10-29T04:53:24.471421300Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:24.474037 env[1306]: time="2025-10-29T04:53:24.474002433Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:24.475289 env[1306]: time="2025-10-29T04:53:24.475236002Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Oct 29 04:53:24.476101 env[1306]: time="2025-10-29T04:53:24.476050812Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 29 04:53:25.191902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2268093161.mount: Deactivated successfully. 
Oct 29 04:53:25.198526 env[1306]: time="2025-10-29T04:53:25.198457953Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:25.199974 env[1306]: time="2025-10-29T04:53:25.199938444Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:25.201727 env[1306]: time="2025-10-29T04:53:25.201692914Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:25.218491 env[1306]: time="2025-10-29T04:53:25.218341960Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:25.219712 env[1306]: time="2025-10-29T04:53:25.219665589Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Oct 29 04:53:25.220825 env[1306]: time="2025-10-29T04:53:25.220789153Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Oct 29 04:53:25.970579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2237174816.mount: Deactivated successfully. Oct 29 04:53:29.660639 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Oct 29 04:53:29.676047 kernel: audit: type=1130 audit(1761713609.659:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 04:53:29.676170 kernel: audit: type=1131 audit(1761713609.659:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:53:29.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:53:29.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:53:29.660960 systemd[1]: Stopped kubelet.service. Oct 29 04:53:29.666347 systemd[1]: Starting kubelet.service... Oct 29 04:53:29.724055 env[1306]: time="2025-10-29T04:53:29.723319824Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:29.726738 env[1306]: time="2025-10-29T04:53:29.726696790Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:29.730836 env[1306]: time="2025-10-29T04:53:29.730413291Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:29.734316 env[1306]: time="2025-10-29T04:53:29.734268588Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:29.754962 env[1306]: time="2025-10-29T04:53:29.753056411Z" level=info 
msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Oct 29 04:53:30.203038 systemd[1]: Started kubelet.service. Oct 29 04:53:30.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:53:30.211399 kernel: audit: type=1130 audit(1761713610.202:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:53:30.296902 kubelet[1679]: E1029 04:53:30.296802 1679 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 04:53:30.299106 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 04:53:30.299934 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 04:53:30.305804 kernel: audit: type=1131 audit(1761713610.299:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 29 04:53:30.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 29 04:53:31.492121 update_engine[1294]: I1029 04:53:31.491999 1294 update_attempter.cc:509] Updating boot flags... Oct 29 04:53:34.309573 systemd[1]: Stopped kubelet.service. 
Oct 29 04:53:34.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:53:34.314663 systemd[1]: Starting kubelet.service... Oct 29 04:53:34.315452 kernel: audit: type=1130 audit(1761713614.308:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:53:34.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:53:34.321424 kernel: audit: type=1131 audit(1761713614.308:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:53:34.366770 systemd[1]: Reloading. Oct 29 04:53:34.532198 /usr/lib/systemd/system-generators/torcx-generator[1740]: time="2025-10-29T04:53:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Oct 29 04:53:34.532261 /usr/lib/systemd/system-generators/torcx-generator[1740]: time="2025-10-29T04:53:34Z" level=info msg="torcx already run" Oct 29 04:53:34.649855 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 29 04:53:34.649893 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 29 04:53:34.679063 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 29 04:53:34.816140 systemd[1]: Started kubelet.service. Oct 29 04:53:34.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:53:34.825427 kernel: audit: type=1130 audit(1761713614.815:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:53:34.829677 systemd[1]: Stopping kubelet.service... Oct 29 04:53:34.830189 systemd[1]: kubelet.service: Deactivated successfully. Oct 29 04:53:34.830775 systemd[1]: Stopped kubelet.service. Oct 29 04:53:34.836917 kernel: audit: type=1131 audit(1761713614.829:219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:53:34.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:53:34.835883 systemd[1]: Starting kubelet.service... Oct 29 04:53:35.148277 systemd[1]: Started kubelet.service. Oct 29 04:53:35.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 04:53:35.155412 kernel: audit: type=1130 audit(1761713615.147:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:53:35.224645 kubelet[1813]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 29 04:53:35.225283 kubelet[1813]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 29 04:53:35.225439 kubelet[1813]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 29 04:53:35.225799 kubelet[1813]: I1029 04:53:35.225752 1813 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 29 04:53:36.181212 kubelet[1813]: I1029 04:53:36.181160 1813 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 29 04:53:36.181212 kubelet[1813]: I1029 04:53:36.181203 1813 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 29 04:53:36.181660 kubelet[1813]: I1029 04:53:36.181635 1813 server.go:954] "Client rotation is on, will bootstrap in background" Oct 29 04:53:36.214597 kubelet[1813]: E1029 04:53:36.214542 1813 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.24.246:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.24.246:6443: connect: connection refused" logger="UnhandledError" Oct 29 04:53:36.216926 kubelet[1813]: I1029 04:53:36.216896 1813 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 29 04:53:36.225513 kubelet[1813]: E1029 04:53:36.225456 1813 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 29 04:53:36.226117 kubelet[1813]: I1029 04:53:36.226089 1813 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 29 04:53:36.233811 kubelet[1813]: I1029 04:53:36.233776 1813 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 29 04:53:36.235969 kubelet[1813]: I1029 04:53:36.235899 1813 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 29 04:53:36.236561 kubelet[1813]: I1029 04:53:36.236116 1813 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-xtjva.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Oct 29 04:53:36.236905 kubelet[1813]: I1029 04:53:36.236878 1813 topology_manager.go:138] "Creating topology manager 
with none policy" Oct 29 04:53:36.237040 kubelet[1813]: I1029 04:53:36.237018 1813 container_manager_linux.go:304] "Creating device plugin manager" Oct 29 04:53:36.237399 kubelet[1813]: I1029 04:53:36.237365 1813 state_mem.go:36] "Initialized new in-memory state store" Oct 29 04:53:36.241344 kubelet[1813]: I1029 04:53:36.241319 1813 kubelet.go:446] "Attempting to sync node with API server" Oct 29 04:53:36.241565 kubelet[1813]: I1029 04:53:36.241515 1813 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 29 04:53:36.241756 kubelet[1813]: I1029 04:53:36.241731 1813 kubelet.go:352] "Adding apiserver pod source" Oct 29 04:53:36.241910 kubelet[1813]: I1029 04:53:36.241885 1813 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 29 04:53:36.250654 kubelet[1813]: W1029 04:53:36.250257 1813 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.24.246:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-xtjva.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.24.246:6443: connect: connection refused Oct 29 04:53:36.250654 kubelet[1813]: E1029 04:53:36.250358 1813 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.24.246:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-xtjva.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.24.246:6443: connect: connection refused" logger="UnhandledError" Oct 29 04:53:36.251455 kubelet[1813]: W1029 04:53:36.250971 1813 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.24.246:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.24.246:6443: connect: connection refused Oct 29 04:53:36.251455 kubelet[1813]: E1029 04:53:36.251027 1813 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.24.246:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.24.246:6443: connect: connection refused" logger="UnhandledError" Oct 29 04:53:36.251455 kubelet[1813]: I1029 04:53:36.251164 1813 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 29 04:53:36.251898 kubelet[1813]: I1029 04:53:36.251868 1813 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 29 04:53:36.252734 kubelet[1813]: W1029 04:53:36.252681 1813 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 29 04:53:36.258605 kubelet[1813]: I1029 04:53:36.258570 1813 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 29 04:53:36.258727 kubelet[1813]: I1029 04:53:36.258636 1813 server.go:1287] "Started kubelet" Oct 29 04:53:36.265095 kubelet[1813]: I1029 04:53:36.265034 1813 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 29 04:53:36.266065 kubelet[1813]: I1029 04:53:36.265992 1813 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 29 04:53:36.266931 kubelet[1813]: I1029 04:53:36.266897 1813 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 29 04:53:36.271433 kubelet[1813]: E1029 04:53:36.267504 1813 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.24.246:6443/api/v1/namespaces/default/events\": dial tcp 10.230.24.246:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-xtjva.gb1.brightbox.com.1872dd3335100f17 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-xtjva.gb1.brightbox.com,UID:srv-xtjva.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-xtjva.gb1.brightbox.com,},FirstTimestamp:2025-10-29 04:53:36.258600727 +0000 UTC m=+1.100699725,LastTimestamp:2025-10-29 04:53:36.258600727 +0000 UTC m=+1.100699725,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-xtjva.gb1.brightbox.com,}" Oct 29 04:53:36.273554 kubelet[1813]: I1029 04:53:36.273508 1813 server.go:479] "Adding debug handlers to kubelet server" Oct 29 04:53:36.272000 audit[1813]: AVC avc: denied { mac_admin } for pid=1813 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:53:36.286413 kernel: audit: type=1400 audit(1761713616.272:221): avc: denied { mac_admin } for pid=1813 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:53:36.286581 kernel: audit: type=1401 audit(1761713616.272:221): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 29 04:53:36.286644 kernel: audit: type=1300 audit(1761713616.272:221): arch=c000003e syscall=188 success=no exit=-22 a0=c000b29170 a1=c000b2a8b8 a2=c000b29140 a3=25 items=0 ppid=1 pid=1813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:36.272000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 29 04:53:36.272000 audit[1813]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b29170 a1=c000b2a8b8 a2=c000b29140 a3=25 items=0 ppid=1 pid=1813 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:36.287051 kubelet[1813]: I1029 04:53:36.282050 1813 kubelet.go:1507] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins_registry: invalid argument" Oct 29 04:53:36.287051 kubelet[1813]: I1029 04:53:36.284860 1813 kubelet.go:1511] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Oct 29 04:53:36.287051 kubelet[1813]: I1029 04:53:36.285136 1813 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 29 04:53:36.272000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 29 04:53:36.295178 kubelet[1813]: I1029 04:53:36.295148 1813 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 29 04:53:36.280000 audit[1813]: AVC avc: denied { mac_admin } for pid=1813 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:53:36.304310 kernel: audit: type=1327 audit(1761713616.272:221): 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 29 04:53:36.304427 kernel: audit: type=1400 audit(1761713616.280:222): avc: denied { mac_admin } for pid=1813 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:53:36.304515 kernel: audit: type=1401 audit(1761713616.280:222): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 29 04:53:36.280000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 29 04:53:36.280000 audit[1813]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b07260 a1=c000b2a8d0 a2=c000b29200 a3=25 items=0 ppid=1 pid=1813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:36.311834 kubelet[1813]: I1029 04:53:36.298872 1813 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 29 04:53:36.312433 kubelet[1813]: I1029 04:53:36.298902 1813 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 29 04:53:36.312610 kubelet[1813]: E1029 04:53:36.298971 1813 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-xtjva.gb1.brightbox.com\" not found" Oct 29 04:53:36.312750 kubelet[1813]: W1029 04:53:36.309410 1813 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.24.246:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.24.246:6443: connect: connection refused Oct 29 04:53:36.312991 kubelet[1813]: E1029 04:53:36.312925 1813 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.24.246:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.24.246:6443: connect: connection refused" logger="UnhandledError" Oct 29 04:53:36.313134 kubelet[1813]: E1029 04:53:36.309601 1813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.24.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-xtjva.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.24.246:6443: connect: connection refused" interval="200ms" Oct 29 04:53:36.313434 kubelet[1813]: I1029 04:53:36.313413 1813 reconciler.go:26] "Reconciler: start to sync state" Oct 29 04:53:36.314564 kernel: audit: type=1300 audit(1761713616.280:222): arch=c000003e syscall=188 success=no exit=-22 a0=c000b07260 a1=c000b2a8d0 a2=c000b29200 a3=25 items=0 ppid=1 pid=1813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:36.280000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 29 04:53:36.317099 kubelet[1813]: I1029 04:53:36.317049 1813 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 29 04:53:36.317000 audit[1825]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1825 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:36.317000 audit[1825]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff185d19b0 a2=0 a3=7fff185d199c items=0 
ppid=1813 pid=1825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:36.317000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 29 04:53:36.321133 kubelet[1813]: I1029 04:53:36.321103 1813 factory.go:221] Registration of the containerd container factory successfully Oct 29 04:53:36.321282 kubelet[1813]: I1029 04:53:36.321260 1813 factory.go:221] Registration of the systemd container factory successfully Oct 29 04:53:36.322000 audit[1827]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1827 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:36.322000 audit[1827]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb12eaa00 a2=0 a3=7fffb12ea9ec items=0 ppid=1813 pid=1827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:36.322000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 29 04:53:36.324013 kubelet[1813]: E1029 04:53:36.323652 1813 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 29 04:53:36.328000 audit[1830]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1830 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:36.328000 audit[1830]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcf1597690 a2=0 a3=7ffcf159767c items=0 ppid=1813 pid=1830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:36.328000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 29 04:53:36.332000 audit[1833]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1833 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:36.332000 audit[1833]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc9e537a60 a2=0 a3=7ffc9e537a4c items=0 ppid=1813 pid=1833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:36.332000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 29 04:53:36.349000 audit[1836]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1836 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:36.349000 audit[1836]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffefd41ba40 a2=0 a3=7ffefd41ba2c items=0 ppid=1813 pid=1836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 
04:53:36.349000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 29 04:53:36.351192 kubelet[1813]: I1029 04:53:36.351048 1813 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 29 04:53:36.350000 audit[1837]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1837 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 04:53:36.350000 audit[1837]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffde0404960 a2=0 a3=7ffde040494c items=0 ppid=1813 pid=1837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:36.350000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 29 04:53:36.352683 kubelet[1813]: I1029 04:53:36.352504 1813 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 29 04:53:36.352683 kubelet[1813]: I1029 04:53:36.352572 1813 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 29 04:53:36.352683 kubelet[1813]: I1029 04:53:36.352624 1813 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 29 04:53:36.352683 kubelet[1813]: I1029 04:53:36.352646 1813 kubelet.go:2382] "Starting kubelet main sync loop" Oct 29 04:53:36.352894 kubelet[1813]: E1029 04:53:36.352747 1813 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 29 04:53:36.355000 audit[1840]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=1840 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 04:53:36.355000 audit[1840]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff94629200 a2=0 a3=7fff946291ec items=0 ppid=1813 pid=1840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:36.355000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 29 04:53:36.357116 kubelet[1813]: W1029 04:53:36.357078 1813 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.24.246:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.24.246:6443: connect: connection refused Oct 29 04:53:36.357637 kubelet[1813]: E1029 04:53:36.357132 1813 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.24.246:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.24.246:6443: connect: connection refused" logger="UnhandledError" Oct 29 04:53:36.358000 audit[1841]: NETFILTER_CFG table=nat:33 family=10 entries=2 op=nft_register_chain pid=1841 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 04:53:36.358000 audit[1841]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=128 a0=3 a1=7ffcfcc99250 a2=0 a3=7ffcfcc9923c items=0 ppid=1813 pid=1841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:36.358000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 29 04:53:36.360000 audit[1842]: NETFILTER_CFG table=filter:34 family=10 entries=2 op=nft_register_chain pid=1842 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 04:53:36.360000 audit[1842]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff9e20f090 a2=0 a3=7fff9e20f07c items=0 ppid=1813 pid=1842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:36.360000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 29 04:53:36.362000 audit[1838]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=1838 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:36.362000 audit[1838]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd433ab070 a2=0 a3=7ffd433ab05c items=0 ppid=1813 pid=1838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:36.362000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 29 04:53:36.364000 audit[1843]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_chain pid=1843 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:36.364000 
audit[1843]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe429b6540 a2=0 a3=7ffe429b652c items=0 ppid=1813 pid=1843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:36.364000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 29 04:53:36.369000 audit[1844]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_chain pid=1844 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:36.369000 audit[1844]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe79b3db30 a2=0 a3=7ffe79b3db1c items=0 ppid=1813 pid=1844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:36.369000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 29 04:53:36.383526 kubelet[1813]: I1029 04:53:36.383475 1813 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 29 04:53:36.383526 kubelet[1813]: I1029 04:53:36.383505 1813 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 29 04:53:36.383526 kubelet[1813]: I1029 04:53:36.383555 1813 state_mem.go:36] "Initialized new in-memory state store" Oct 29 04:53:36.385163 kubelet[1813]: I1029 04:53:36.385138 1813 policy_none.go:49] "None policy: Start" Oct 29 04:53:36.385262 kubelet[1813]: I1029 04:53:36.385180 1813 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 29 04:53:36.385262 kubelet[1813]: I1029 04:53:36.385211 1813 state_mem.go:35] "Initializing new in-memory state store" Oct 29 04:53:36.393127 kubelet[1813]: I1029 04:53:36.393089 1813 manager.go:519] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 29 04:53:36.391000 audit[1813]: AVC avc: denied { mac_admin } for pid=1813 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:53:36.391000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 29 04:53:36.391000 audit[1813]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000ee0810 a1=c000e851e8 a2=c000ee07e0 a3=25 items=0 ppid=1 pid=1813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:36.391000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 29 04:53:36.393676 kubelet[1813]: I1029 04:53:36.393234 1813 server.go:94] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" Oct 29 04:53:36.393676 kubelet[1813]: I1029 04:53:36.393437 1813 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 29 04:53:36.393676 kubelet[1813]: I1029 04:53:36.393463 1813 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 29 04:53:36.396238 kubelet[1813]: I1029 04:53:36.396031 1813 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 29 04:53:36.398141 kubelet[1813]: E1029 04:53:36.398094 1813 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 29 04:53:36.398235 kubelet[1813]: E1029 04:53:36.398178 1813 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-xtjva.gb1.brightbox.com\" not found" Oct 29 04:53:36.467210 kubelet[1813]: E1029 04:53:36.465045 1813 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-xtjva.gb1.brightbox.com\" not found" node="srv-xtjva.gb1.brightbox.com" Oct 29 04:53:36.471169 kubelet[1813]: E1029 04:53:36.471138 1813 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-xtjva.gb1.brightbox.com\" not found" node="srv-xtjva.gb1.brightbox.com" Oct 29 04:53:36.479163 kubelet[1813]: E1029 04:53:36.479126 1813 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-xtjva.gb1.brightbox.com\" not found" node="srv-xtjva.gb1.brightbox.com" Oct 29 04:53:36.499140 kubelet[1813]: I1029 04:53:36.499107 1813 kubelet_node_status.go:75] "Attempting to register node" node="srv-xtjva.gb1.brightbox.com" Oct 29 04:53:36.499687 kubelet[1813]: E1029 04:53:36.499653 1813 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.24.246:6443/api/v1/nodes\": dial tcp 10.230.24.246:6443: connect: connection refused" node="srv-xtjva.gb1.brightbox.com" Oct 29 04:53:36.514199 kubelet[1813]: I1029 04:53:36.514141 1813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4dd9e47397cd512feb3ae413cd9000fa-k8s-certs\") pod \"kube-apiserver-srv-xtjva.gb1.brightbox.com\" (UID: \"4dd9e47397cd512feb3ae413cd9000fa\") " pod="kube-system/kube-apiserver-srv-xtjva.gb1.brightbox.com" Oct 29 04:53:36.514308 kubelet[1813]: I1029 04:53:36.514237 1813 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4dd9e47397cd512feb3ae413cd9000fa-usr-share-ca-certificates\") pod \"kube-apiserver-srv-xtjva.gb1.brightbox.com\" (UID: \"4dd9e47397cd512feb3ae413cd9000fa\") " pod="kube-system/kube-apiserver-srv-xtjva.gb1.brightbox.com" Oct 29 04:53:36.514308 kubelet[1813]: I1029 04:53:36.514292 1813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b298e803921018e3493da58940023ea8-k8s-certs\") pod \"kube-controller-manager-srv-xtjva.gb1.brightbox.com\" (UID: \"b298e803921018e3493da58940023ea8\") " pod="kube-system/kube-controller-manager-srv-xtjva.gb1.brightbox.com" Oct 29 04:53:36.514497 kubelet[1813]: I1029 04:53:36.514320 1813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b298e803921018e3493da58940023ea8-kubeconfig\") pod \"kube-controller-manager-srv-xtjva.gb1.brightbox.com\" (UID: \"b298e803921018e3493da58940023ea8\") " pod="kube-system/kube-controller-manager-srv-xtjva.gb1.brightbox.com" Oct 29 04:53:36.514497 kubelet[1813]: I1029 04:53:36.514347 1813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ee2b7c8cca7f94942e4d14e94ebd62a8-kubeconfig\") pod \"kube-scheduler-srv-xtjva.gb1.brightbox.com\" (UID: \"ee2b7c8cca7f94942e4d14e94ebd62a8\") " pod="kube-system/kube-scheduler-srv-xtjva.gb1.brightbox.com" Oct 29 04:53:36.514497 kubelet[1813]: I1029 04:53:36.514432 1813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4dd9e47397cd512feb3ae413cd9000fa-ca-certs\") pod \"kube-apiserver-srv-xtjva.gb1.brightbox.com\" (UID: 
\"4dd9e47397cd512feb3ae413cd9000fa\") " pod="kube-system/kube-apiserver-srv-xtjva.gb1.brightbox.com" Oct 29 04:53:36.514497 kubelet[1813]: I1029 04:53:36.514462 1813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b298e803921018e3493da58940023ea8-ca-certs\") pod \"kube-controller-manager-srv-xtjva.gb1.brightbox.com\" (UID: \"b298e803921018e3493da58940023ea8\") " pod="kube-system/kube-controller-manager-srv-xtjva.gb1.brightbox.com" Oct 29 04:53:36.514497 kubelet[1813]: I1029 04:53:36.514492 1813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b298e803921018e3493da58940023ea8-flexvolume-dir\") pod \"kube-controller-manager-srv-xtjva.gb1.brightbox.com\" (UID: \"b298e803921018e3493da58940023ea8\") " pod="kube-system/kube-controller-manager-srv-xtjva.gb1.brightbox.com" Oct 29 04:53:36.514783 kubelet[1813]: I1029 04:53:36.514523 1813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b298e803921018e3493da58940023ea8-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-xtjva.gb1.brightbox.com\" (UID: \"b298e803921018e3493da58940023ea8\") " pod="kube-system/kube-controller-manager-srv-xtjva.gb1.brightbox.com" Oct 29 04:53:36.515002 kubelet[1813]: E1029 04:53:36.514957 1813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.24.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-xtjva.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.24.246:6443: connect: connection refused" interval="400ms" Oct 29 04:53:36.703610 kubelet[1813]: I1029 04:53:36.703568 1813 kubelet_node_status.go:75] "Attempting to register node" node="srv-xtjva.gb1.brightbox.com" Oct 29 04:53:36.704344 kubelet[1813]: E1029 
04:53:36.704294 1813 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.24.246:6443/api/v1/nodes\": dial tcp 10.230.24.246:6443: connect: connection refused" node="srv-xtjva.gb1.brightbox.com" Oct 29 04:53:36.767720 env[1306]: time="2025-10-29T04:53:36.766994940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-xtjva.gb1.brightbox.com,Uid:b298e803921018e3493da58940023ea8,Namespace:kube-system,Attempt:0,}" Oct 29 04:53:36.772410 env[1306]: time="2025-10-29T04:53:36.772322312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-xtjva.gb1.brightbox.com,Uid:4dd9e47397cd512feb3ae413cd9000fa,Namespace:kube-system,Attempt:0,}" Oct 29 04:53:36.780652 env[1306]: time="2025-10-29T04:53:36.780598930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-xtjva.gb1.brightbox.com,Uid:ee2b7c8cca7f94942e4d14e94ebd62a8,Namespace:kube-system,Attempt:0,}" Oct 29 04:53:36.919173 kubelet[1813]: E1029 04:53:36.915981 1813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.24.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-xtjva.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.24.246:6443: connect: connection refused" interval="800ms" Oct 29 04:53:37.108538 kubelet[1813]: I1029 04:53:37.108357 1813 kubelet_node_status.go:75] "Attempting to register node" node="srv-xtjva.gb1.brightbox.com" Oct 29 04:53:37.109293 kubelet[1813]: E1029 04:53:37.109235 1813 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.24.246:6443/api/v1/nodes\": dial tcp 10.230.24.246:6443: connect: connection refused" node="srv-xtjva.gb1.brightbox.com" Oct 29 04:53:37.220689 kubelet[1813]: W1029 04:53:37.220611 1813 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.230.24.246:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.24.246:6443: connect: connection refused Oct 29 04:53:37.220907 kubelet[1813]: E1029 04:53:37.220730 1813 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.24.246:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.24.246:6443: connect: connection refused" logger="UnhandledError" Oct 29 04:53:37.276845 kubelet[1813]: W1029 04:53:37.276726 1813 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.24.246:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-xtjva.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.24.246:6443: connect: connection refused Oct 29 04:53:37.277647 kubelet[1813]: E1029 04:53:37.276850 1813 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.24.246:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-xtjva.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.24.246:6443: connect: connection refused" logger="UnhandledError" Oct 29 04:53:37.345193 kubelet[1813]: W1029 04:53:37.345092 1813 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.24.246:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.24.246:6443: connect: connection refused Oct 29 04:53:37.345193 kubelet[1813]: E1029 04:53:37.345164 1813 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.24.246:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.24.246:6443: connect: connection refused" 
logger="UnhandledError" Oct 29 04:53:37.363866 kubelet[1813]: W1029 04:53:37.363651 1813 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.24.246:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.24.246:6443: connect: connection refused Oct 29 04:53:37.363866 kubelet[1813]: E1029 04:53:37.363729 1813 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.24.246:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.24.246:6443: connect: connection refused" logger="UnhandledError" Oct 29 04:53:37.555846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1196391380.mount: Deactivated successfully. Oct 29 04:53:37.561499 env[1306]: time="2025-10-29T04:53:37.561429473Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:37.565301 env[1306]: time="2025-10-29T04:53:37.565253967Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:37.566642 env[1306]: time="2025-10-29T04:53:37.566599580Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:37.567559 env[1306]: time="2025-10-29T04:53:37.567501530Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:37.568476 env[1306]: time="2025-10-29T04:53:37.568441428Z" level=info 
msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:37.571208 env[1306]: time="2025-10-29T04:53:37.571169768Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:37.574503 env[1306]: time="2025-10-29T04:53:37.574466733Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:37.576119 env[1306]: time="2025-10-29T04:53:37.576082418Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:37.579913 env[1306]: time="2025-10-29T04:53:37.579872809Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:37.589488 env[1306]: time="2025-10-29T04:53:37.589436558Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:37.590408 env[1306]: time="2025-10-29T04:53:37.590366846Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:37.594363 env[1306]: time="2025-10-29T04:53:37.594318762Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Oct 29 04:53:37.607408 env[1306]: time="2025-10-29T04:53:37.607038441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 04:53:37.607408 env[1306]: time="2025-10-29T04:53:37.607120852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 04:53:37.607408 env[1306]: time="2025-10-29T04:53:37.607147139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 04:53:37.608686 env[1306]: time="2025-10-29T04:53:37.608567313Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2b4bd3d634b74a4f4f755269b2912b872c7ce307b844bec74802e92abedce1a4 pid=1855 runtime=io.containerd.runc.v2 Oct 29 04:53:37.623561 env[1306]: time="2025-10-29T04:53:37.619965986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 04:53:37.623561 env[1306]: time="2025-10-29T04:53:37.620047138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 04:53:37.623561 env[1306]: time="2025-10-29T04:53:37.620066137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 04:53:37.623561 env[1306]: time="2025-10-29T04:53:37.620337993Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/21e99ed6f54bde610bdbfccfd5afc1230b34307bf2d9e1c93833d70191680725 pid=1873 runtime=io.containerd.runc.v2 Oct 29 04:53:37.670945 env[1306]: time="2025-10-29T04:53:37.670835798Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 04:53:37.671347 env[1306]: time="2025-10-29T04:53:37.671280961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 04:53:37.671634 env[1306]: time="2025-10-29T04:53:37.671585098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 04:53:37.672129 env[1306]: time="2025-10-29T04:53:37.672048592Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/13118dfb357f32500de4fc0b2a501a69da8cb57de875eb618eb87a5fa0c4f118 pid=1907 runtime=io.containerd.runc.v2 Oct 29 04:53:37.717848 kubelet[1813]: E1029 04:53:37.717757 1813 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.24.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-xtjva.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.24.246:6443: connect: connection refused" interval="1.6s" Oct 29 04:53:37.796156 env[1306]: time="2025-10-29T04:53:37.796089622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-xtjva.gb1.brightbox.com,Uid:ee2b7c8cca7f94942e4d14e94ebd62a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b4bd3d634b74a4f4f755269b2912b872c7ce307b844bec74802e92abedce1a4\"" Oct 29 04:53:37.801110 env[1306]: time="2025-10-29T04:53:37.801071912Z" level=info msg="CreateContainer within sandbox \"2b4bd3d634b74a4f4f755269b2912b872c7ce307b844bec74802e92abedce1a4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 29 04:53:37.801600 env[1306]: time="2025-10-29T04:53:37.801332800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-xtjva.gb1.brightbox.com,Uid:4dd9e47397cd512feb3ae413cd9000fa,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"21e99ed6f54bde610bdbfccfd5afc1230b34307bf2d9e1c93833d70191680725\"" Oct 29 04:53:37.805428 env[1306]: time="2025-10-29T04:53:37.805368010Z" level=info msg="CreateContainer within sandbox \"21e99ed6f54bde610bdbfccfd5afc1230b34307bf2d9e1c93833d70191680725\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 29 04:53:37.812063 env[1306]: time="2025-10-29T04:53:37.812012871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-xtjva.gb1.brightbox.com,Uid:b298e803921018e3493da58940023ea8,Namespace:kube-system,Attempt:0,} returns sandbox id \"13118dfb357f32500de4fc0b2a501a69da8cb57de875eb618eb87a5fa0c4f118\"" Oct 29 04:53:37.816806 env[1306]: time="2025-10-29T04:53:37.816749315Z" level=info msg="CreateContainer within sandbox \"13118dfb357f32500de4fc0b2a501a69da8cb57de875eb618eb87a5fa0c4f118\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 29 04:53:37.822799 env[1306]: time="2025-10-29T04:53:37.822751367Z" level=info msg="CreateContainer within sandbox \"2b4bd3d634b74a4f4f755269b2912b872c7ce307b844bec74802e92abedce1a4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1593b73100950ad9ce9c7558e9e52c1a42b110eadc02e36c0de66dbe2396fbaa\"" Oct 29 04:53:37.823890 env[1306]: time="2025-10-29T04:53:37.823829182Z" level=info msg="StartContainer for \"1593b73100950ad9ce9c7558e9e52c1a42b110eadc02e36c0de66dbe2396fbaa\"" Oct 29 04:53:37.828027 env[1306]: time="2025-10-29T04:53:37.827984482Z" level=info msg="CreateContainer within sandbox \"21e99ed6f54bde610bdbfccfd5afc1230b34307bf2d9e1c93833d70191680725\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3d441894b6a71334474e10859de6374a3191ee3c45fdf3432227c03a5fd144ec\"" Oct 29 04:53:37.828991 env[1306]: time="2025-10-29T04:53:37.828956598Z" level=info msg="StartContainer for \"3d441894b6a71334474e10859de6374a3191ee3c45fdf3432227c03a5fd144ec\"" Oct 29 04:53:37.836637 env[1306]: 
time="2025-10-29T04:53:37.836586990Z" level=info msg="CreateContainer within sandbox \"13118dfb357f32500de4fc0b2a501a69da8cb57de875eb618eb87a5fa0c4f118\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b843e5b5b68d4c2ebd7a58d3f8a2b2a76f0d1068ff2e2ddd1e484381226e616d\"" Oct 29 04:53:37.837364 env[1306]: time="2025-10-29T04:53:37.837330404Z" level=info msg="StartContainer for \"b843e5b5b68d4c2ebd7a58d3f8a2b2a76f0d1068ff2e2ddd1e484381226e616d\"" Oct 29 04:53:37.918704 kubelet[1813]: I1029 04:53:37.917820 1813 kubelet_node_status.go:75] "Attempting to register node" node="srv-xtjva.gb1.brightbox.com" Oct 29 04:53:37.918704 kubelet[1813]: E1029 04:53:37.918614 1813 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.24.246:6443/api/v1/nodes\": dial tcp 10.230.24.246:6443: connect: connection refused" node="srv-xtjva.gb1.brightbox.com" Oct 29 04:53:37.985131 env[1306]: time="2025-10-29T04:53:37.985049366Z" level=info msg="StartContainer for \"1593b73100950ad9ce9c7558e9e52c1a42b110eadc02e36c0de66dbe2396fbaa\" returns successfully" Oct 29 04:53:38.035802 env[1306]: time="2025-10-29T04:53:38.035735053Z" level=info msg="StartContainer for \"3d441894b6a71334474e10859de6374a3191ee3c45fdf3432227c03a5fd144ec\" returns successfully" Oct 29 04:53:38.059610 env[1306]: time="2025-10-29T04:53:38.059500393Z" level=info msg="StartContainer for \"b843e5b5b68d4c2ebd7a58d3f8a2b2a76f0d1068ff2e2ddd1e484381226e616d\" returns successfully" Oct 29 04:53:38.370720 kubelet[1813]: E1029 04:53:38.370210 1813 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-xtjva.gb1.brightbox.com\" not found" node="srv-xtjva.gb1.brightbox.com" Oct 29 04:53:38.371902 kubelet[1813]: E1029 04:53:38.371874 1813 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-xtjva.gb1.brightbox.com\" not found" 
node="srv-xtjva.gb1.brightbox.com" Oct 29 04:53:38.384084 kubelet[1813]: E1029 04:53:38.384021 1813 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-xtjva.gb1.brightbox.com\" not found" node="srv-xtjva.gb1.brightbox.com" Oct 29 04:53:38.412346 kubelet[1813]: E1029 04:53:38.412285 1813 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.24.246:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.24.246:6443: connect: connection refused" logger="UnhandledError" Oct 29 04:53:39.383396 kubelet[1813]: E1029 04:53:39.383337 1813 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-xtjva.gb1.brightbox.com\" not found" node="srv-xtjva.gb1.brightbox.com" Oct 29 04:53:39.384432 kubelet[1813]: E1029 04:53:39.384404 1813 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-xtjva.gb1.brightbox.com\" not found" node="srv-xtjva.gb1.brightbox.com" Oct 29 04:53:39.385747 kubelet[1813]: E1029 04:53:39.385717 1813 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-xtjva.gb1.brightbox.com\" not found" node="srv-xtjva.gb1.brightbox.com" Oct 29 04:53:39.522746 kubelet[1813]: I1029 04:53:39.522702 1813 kubelet_node_status.go:75] "Attempting to register node" node="srv-xtjva.gb1.brightbox.com" Oct 29 04:53:40.386094 kubelet[1813]: E1029 04:53:40.385691 1813 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-xtjva.gb1.brightbox.com\" not found" node="srv-xtjva.gb1.brightbox.com" Oct 29 04:53:40.744707 kubelet[1813]: E1029 04:53:40.744656 1813 nodelease.go:49] "Failed to get node 
when trying to set owner ref to the node lease" err="nodes \"srv-xtjva.gb1.brightbox.com\" not found" node="srv-xtjva.gb1.brightbox.com" Oct 29 04:53:40.905362 kubelet[1813]: I1029 04:53:40.905289 1813 kubelet_node_status.go:78] "Successfully registered node" node="srv-xtjva.gb1.brightbox.com" Oct 29 04:53:41.000506 kubelet[1813]: I1029 04:53:41.000280 1813 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-xtjva.gb1.brightbox.com" Oct 29 04:53:41.008214 kubelet[1813]: E1029 04:53:41.008167 1813 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-xtjva.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-xtjva.gb1.brightbox.com" Oct 29 04:53:41.008365 kubelet[1813]: I1029 04:53:41.008208 1813 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-xtjva.gb1.brightbox.com" Oct 29 04:53:41.010435 kubelet[1813]: E1029 04:53:41.010400 1813 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-xtjva.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-xtjva.gb1.brightbox.com" Oct 29 04:53:41.010435 kubelet[1813]: I1029 04:53:41.010434 1813 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-xtjva.gb1.brightbox.com" Oct 29 04:53:41.014036 kubelet[1813]: E1029 04:53:41.013993 1813 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-xtjva.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-xtjva.gb1.brightbox.com" Oct 29 04:53:41.254216 kubelet[1813]: I1029 04:53:41.254065 1813 apiserver.go:52] "Watching apiserver" Oct 29 04:53:41.312814 kubelet[1813]: I1029 04:53:41.312719 1813 desired_state_of_world_populator.go:158] "Finished 
populating initial desired state of world" Oct 29 04:53:43.160734 systemd[1]: Reloading. Oct 29 04:53:43.286922 /usr/lib/systemd/system-generators/torcx-generator[2110]: time="2025-10-29T04:53:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Oct 29 04:53:43.286981 /usr/lib/systemd/system-generators/torcx-generator[2110]: time="2025-10-29T04:53:43Z" level=info msg="torcx already run" Oct 29 04:53:43.452317 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 29 04:53:43.452845 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 29 04:53:43.487474 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 29 04:53:43.639043 systemd[1]: Stopping kubelet.service... Oct 29 04:53:43.661215 systemd[1]: kubelet.service: Deactivated successfully. Oct 29 04:53:43.662061 systemd[1]: Stopped kubelet.service. Oct 29 04:53:43.672870 kernel: kauditd_printk_skb: 41 callbacks suppressed Oct 29 04:53:43.673162 kernel: audit: type=1131 audit(1761713623.660:236): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:53:43.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 04:53:43.680635 systemd[1]: Starting kubelet.service... Oct 29 04:53:45.057306 systemd[1]: Started kubelet.service. Oct 29 04:53:45.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:53:45.070942 kernel: audit: type=1130 audit(1761713625.056:237): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:53:45.253956 kubelet[2172]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 29 04:53:45.253956 kubelet[2172]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 29 04:53:45.253956 kubelet[2172]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 29 04:53:45.254806 kubelet[2172]: I1029 04:53:45.253955 2172 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 29 04:53:45.264492 kubelet[2172]: I1029 04:53:45.264328 2172 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 29 04:53:45.264492 kubelet[2172]: I1029 04:53:45.264391 2172 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 29 04:53:45.264807 kubelet[2172]: I1029 04:53:45.264773 2172 server.go:954] "Client rotation is on, will bootstrap in background" Oct 29 04:53:45.275093 kubelet[2172]: I1029 04:53:45.274174 2172 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 29 04:53:45.298367 kubelet[2172]: I1029 04:53:45.298016 2172 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 29 04:53:45.307475 kubelet[2172]: E1029 04:53:45.306731 2172 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 29 04:53:45.307475 kubelet[2172]: I1029 04:53:45.306832 2172 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 29 04:53:45.315626 kubelet[2172]: I1029 04:53:45.315185 2172 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 29 04:53:45.316076 kubelet[2172]: I1029 04:53:45.316019 2172 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 29 04:53:45.329693 kubelet[2172]: I1029 04:53:45.316073 2172 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-xtjva.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Oct 29 04:53:45.330058 kubelet[2172]: I1029 04:53:45.329729 2172 topology_manager.go:138] "Creating topology manager 
with none policy" Oct 29 04:53:45.330058 kubelet[2172]: I1029 04:53:45.329755 2172 container_manager_linux.go:304] "Creating device plugin manager" Oct 29 04:53:45.330058 kubelet[2172]: I1029 04:53:45.329875 2172 state_mem.go:36] "Initialized new in-memory state store" Oct 29 04:53:45.330259 kubelet[2172]: I1029 04:53:45.330195 2172 kubelet.go:446] "Attempting to sync node with API server" Oct 29 04:53:45.330259 kubelet[2172]: I1029 04:53:45.330240 2172 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 29 04:53:45.330416 kubelet[2172]: I1029 04:53:45.330293 2172 kubelet.go:352] "Adding apiserver pod source" Oct 29 04:53:45.330416 kubelet[2172]: I1029 04:53:45.330321 2172 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 29 04:53:45.357638 kubelet[2172]: I1029 04:53:45.351955 2172 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 29 04:53:45.357638 kubelet[2172]: I1029 04:53:45.353788 2172 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 29 04:53:45.357638 kubelet[2172]: I1029 04:53:45.356274 2172 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 29 04:53:45.357638 kubelet[2172]: I1029 04:53:45.356334 2172 server.go:1287] "Started kubelet" Oct 29 04:53:45.371000 audit[2172]: AVC avc: denied { mac_admin } for pid=2172 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:53:45.379467 kernel: audit: type=1400 audit(1761713625.371:238): avc: denied { mac_admin } for pid=2172 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:53:45.379668 kernel: audit: type=1401 audit(1761713625.371:238): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 29 04:53:45.371000 audit: 
SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 29 04:53:45.383692 kernel: audit: type=1300 audit(1761713625.371:238): arch=c000003e syscall=188 success=no exit=-22 a0=c0009b1c50 a1=c0009d09f0 a2=c0009b1c20 a3=25 items=0 ppid=1 pid=2172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:45.371000 audit[2172]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0009b1c50 a1=c0009d09f0 a2=c0009b1c20 a3=25 items=0 ppid=1 pid=2172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:45.384109 kubelet[2172]: I1029 04:53:45.381641 2172 kubelet.go:1507] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins_registry: invalid argument" Oct 29 04:53:45.384109 kubelet[2172]: I1029 04:53:45.381822 2172 kubelet.go:1511] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Oct 29 04:53:45.384109 kubelet[2172]: I1029 04:53:45.381991 2172 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 29 04:53:45.391701 kubelet[2172]: I1029 04:53:45.390780 2172 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 29 04:53:45.393046 kubelet[2172]: I1029 04:53:45.392878 2172 server.go:479] "Adding debug handlers to kubelet server" Oct 29 04:53:45.371000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 29 04:53:45.403204 kernel: audit: type=1327 audit(1761713625.371:238): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 29 04:53:45.403720 kubelet[2172]: I1029 04:53:45.395131 2172 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 29 04:53:45.404032 kubelet[2172]: I1029 04:53:45.404004 2172 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 29 04:53:45.404431 kubelet[2172]: I1029 04:53:45.404394 2172 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 29 04:53:45.380000 audit[2172]: AVC avc: denied { mac_admin } for pid=2172 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:53:45.413893 kernel: audit: type=1400 audit(1761713625.380:239): avc: denied { mac_admin } for pid=2172 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:53:45.380000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 29 04:53:45.423829 kernel: audit: type=1401 audit(1761713625.380:239): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 29 04:53:45.424021 kernel: audit: type=1300 audit(1761713625.380:239): arch=c000003e syscall=188 
success=no exit=-22 a0=c0008ef700 a1=c0009d0a08 a2=c0009b1ce0 a3=25 items=0 ppid=1 pid=2172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:45.380000 audit[2172]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0008ef700 a1=c0009d0a08 a2=c0009b1ce0 a3=25 items=0 ppid=1 pid=2172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:45.424240 kubelet[2172]: I1029 04:53:45.419115 2172 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 29 04:53:45.380000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 29 04:53:45.431363 kubelet[2172]: I1029 04:53:45.428422 2172 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 29 04:53:45.431363 kubelet[2172]: I1029 04:53:45.428647 2172 reconciler.go:26] "Reconciler: start to sync state" Oct 29 04:53:45.434198 kubelet[2172]: I1029 04:53:45.434135 2172 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Oct 29 04:53:45.440557 kernel: audit: type=1327 audit(1761713625.380:239): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 29 04:53:45.440763 kubelet[2172]: I1029 04:53:45.440056 2172 factory.go:221] Registration of the systemd container factory successfully Oct 29 04:53:45.440763 kubelet[2172]: I1029 04:53:45.440242 2172 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 29 04:53:45.455877 kubelet[2172]: I1029 04:53:45.455050 2172 factory.go:221] Registration of the containerd container factory successfully Oct 29 04:53:45.457783 kubelet[2172]: E1029 04:53:45.456480 2172 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 29 04:53:45.475096 kubelet[2172]: I1029 04:53:45.475047 2172 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 29 04:53:45.475506 kubelet[2172]: I1029 04:53:45.475348 2172 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 29 04:53:45.475697 kubelet[2172]: I1029 04:53:45.475670 2172 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 29 04:53:45.482680 kubelet[2172]: I1029 04:53:45.482650 2172 kubelet.go:2382] "Starting kubelet main sync loop" Oct 29 04:53:45.483020 kubelet[2172]: E1029 04:53:45.482951 2172 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 29 04:53:45.585079 kubelet[2172]: E1029 04:53:45.584135 2172 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 29 04:53:45.596303 kubelet[2172]: I1029 04:53:45.596255 2172 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 29 04:53:45.596573 kubelet[2172]: I1029 04:53:45.596546 2172 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 29 04:53:45.596733 kubelet[2172]: I1029 04:53:45.596711 2172 state_mem.go:36] "Initialized new in-memory state store" Oct 29 04:53:45.597123 kubelet[2172]: I1029 04:53:45.597096 2172 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 29 04:53:45.597281 kubelet[2172]: I1029 04:53:45.597238 2172 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 29 04:53:45.597434 kubelet[2172]: I1029 04:53:45.597410 2172 policy_none.go:49] "None policy: Start" Oct 29 04:53:45.597588 kubelet[2172]: I1029 04:53:45.597564 2172 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 29 04:53:45.597738 kubelet[2172]: I1029 04:53:45.597715 2172 state_mem.go:35] "Initializing new in-memory state store" Oct 29 04:53:45.598030 kubelet[2172]: I1029 04:53:45.598006 2172 state_mem.go:75] "Updated machine memory state" Oct 29 04:53:45.601123 kubelet[2172]: I1029 04:53:45.601095 2172 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 29 04:53:45.599000 audit[2172]: AVC avc: denied { mac_admin } for pid=2172 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 29 04:53:45.599000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 29 04:53:45.599000 audit[2172]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0010a22d0 a1=c0010a4168 a2=c0010a22a0 a3=25 items=0 ppid=1 pid=2172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:45.599000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 29 04:53:45.601912 kubelet[2172]: I1029 04:53:45.601879 2172 server.go:94] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" Oct 29 04:53:45.602280 kubelet[2172]: I1029 04:53:45.602253 2172 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 29 04:53:45.602501 kubelet[2172]: I1029 04:53:45.602426 2172 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 29 04:53:45.602940 kubelet[2172]: I1029 04:53:45.602915 2172 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 29 04:53:45.608163 kubelet[2172]: E1029 04:53:45.606736 2172 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 29 04:53:45.735985 kubelet[2172]: I1029 04:53:45.735932 2172 kubelet_node_status.go:75] "Attempting to register node" node="srv-xtjva.gb1.brightbox.com" Oct 29 04:53:45.751023 kubelet[2172]: I1029 04:53:45.750675 2172 kubelet_node_status.go:124] "Node was previously registered" node="srv-xtjva.gb1.brightbox.com" Oct 29 04:53:45.751023 kubelet[2172]: I1029 04:53:45.750874 2172 kubelet_node_status.go:78] "Successfully registered node" node="srv-xtjva.gb1.brightbox.com" Oct 29 04:53:45.786756 kubelet[2172]: I1029 04:53:45.786671 2172 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-xtjva.gb1.brightbox.com" Oct 29 04:53:45.786971 kubelet[2172]: I1029 04:53:45.786944 2172 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-xtjva.gb1.brightbox.com" Oct 29 04:53:45.790239 kubelet[2172]: I1029 04:53:45.787830 2172 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-xtjva.gb1.brightbox.com" Oct 29 04:53:45.797057 kubelet[2172]: W1029 04:53:45.796173 2172 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 29 04:53:45.797057 kubelet[2172]: W1029 04:53:45.796363 2172 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 29 04:53:45.800244 kubelet[2172]: W1029 04:53:45.800209 2172 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 29 04:53:45.831294 kubelet[2172]: I1029 04:53:45.831097 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/4dd9e47397cd512feb3ae413cd9000fa-usr-share-ca-certificates\") pod \"kube-apiserver-srv-xtjva.gb1.brightbox.com\" (UID: \"4dd9e47397cd512feb3ae413cd9000fa\") " pod="kube-system/kube-apiserver-srv-xtjva.gb1.brightbox.com" Oct 29 04:53:45.831294 kubelet[2172]: I1029 04:53:45.831150 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b298e803921018e3493da58940023ea8-ca-certs\") pod \"kube-controller-manager-srv-xtjva.gb1.brightbox.com\" (UID: \"b298e803921018e3493da58940023ea8\") " pod="kube-system/kube-controller-manager-srv-xtjva.gb1.brightbox.com" Oct 29 04:53:45.831294 kubelet[2172]: I1029 04:53:45.831266 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b298e803921018e3493da58940023ea8-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-xtjva.gb1.brightbox.com\" (UID: \"b298e803921018e3493da58940023ea8\") " pod="kube-system/kube-controller-manager-srv-xtjva.gb1.brightbox.com" Oct 29 04:53:45.831653 kubelet[2172]: I1029 04:53:45.831323 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4dd9e47397cd512feb3ae413cd9000fa-ca-certs\") pod \"kube-apiserver-srv-xtjva.gb1.brightbox.com\" (UID: \"4dd9e47397cd512feb3ae413cd9000fa\") " pod="kube-system/kube-apiserver-srv-xtjva.gb1.brightbox.com" Oct 29 04:53:45.831653 kubelet[2172]: I1029 04:53:45.831475 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4dd9e47397cd512feb3ae413cd9000fa-k8s-certs\") pod \"kube-apiserver-srv-xtjva.gb1.brightbox.com\" (UID: \"4dd9e47397cd512feb3ae413cd9000fa\") " pod="kube-system/kube-apiserver-srv-xtjva.gb1.brightbox.com" Oct 29 
04:53:45.831653 kubelet[2172]: I1029 04:53:45.831548 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b298e803921018e3493da58940023ea8-flexvolume-dir\") pod \"kube-controller-manager-srv-xtjva.gb1.brightbox.com\" (UID: \"b298e803921018e3493da58940023ea8\") " pod="kube-system/kube-controller-manager-srv-xtjva.gb1.brightbox.com" Oct 29 04:53:45.831653 kubelet[2172]: I1029 04:53:45.831611 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b298e803921018e3493da58940023ea8-k8s-certs\") pod \"kube-controller-manager-srv-xtjva.gb1.brightbox.com\" (UID: \"b298e803921018e3493da58940023ea8\") " pod="kube-system/kube-controller-manager-srv-xtjva.gb1.brightbox.com" Oct 29 04:53:45.831900 kubelet[2172]: I1029 04:53:45.831661 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b298e803921018e3493da58940023ea8-kubeconfig\") pod \"kube-controller-manager-srv-xtjva.gb1.brightbox.com\" (UID: \"b298e803921018e3493da58940023ea8\") " pod="kube-system/kube-controller-manager-srv-xtjva.gb1.brightbox.com" Oct 29 04:53:45.831900 kubelet[2172]: I1029 04:53:45.831707 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ee2b7c8cca7f94942e4d14e94ebd62a8-kubeconfig\") pod \"kube-scheduler-srv-xtjva.gb1.brightbox.com\" (UID: \"ee2b7c8cca7f94942e4d14e94ebd62a8\") " pod="kube-system/kube-scheduler-srv-xtjva.gb1.brightbox.com" Oct 29 04:53:46.350567 kubelet[2172]: I1029 04:53:46.350516 2172 apiserver.go:52] "Watching apiserver" Oct 29 04:53:46.429550 kubelet[2172]: I1029 04:53:46.429488 2172 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 29 
04:53:46.530290 kubelet[2172]: I1029 04:53:46.529186 2172 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-xtjva.gb1.brightbox.com" Oct 29 04:53:46.530574 kubelet[2172]: I1029 04:53:46.530531 2172 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-xtjva.gb1.brightbox.com" Oct 29 04:53:46.541178 kubelet[2172]: W1029 04:53:46.541140 2172 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 29 04:53:46.541570 kubelet[2172]: E1029 04:53:46.541525 2172 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-xtjva.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-xtjva.gb1.brightbox.com" Oct 29 04:53:46.541914 kubelet[2172]: W1029 04:53:46.541890 2172 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 29 04:53:46.542072 kubelet[2172]: E1029 04:53:46.542045 2172 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-xtjva.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-xtjva.gb1.brightbox.com" Oct 29 04:53:46.647497 kubelet[2172]: I1029 04:53:46.647243 2172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-xtjva.gb1.brightbox.com" podStartSLOduration=1.647211983 podStartE2EDuration="1.647211983s" podCreationTimestamp="2025-10-29 04:53:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 04:53:46.643771724 +0000 UTC m=+1.553807721" watchObservedRunningTime="2025-10-29 04:53:46.647211983 +0000 UTC m=+1.557247970" Oct 29 04:53:46.690549 kubelet[2172]: I1029 04:53:46.690453 2172 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="kube-system/kube-controller-manager-srv-xtjva.gb1.brightbox.com" podStartSLOduration=1.690429572 podStartE2EDuration="1.690429572s" podCreationTimestamp="2025-10-29 04:53:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 04:53:46.68955397 +0000 UTC m=+1.599589980" watchObservedRunningTime="2025-10-29 04:53:46.690429572 +0000 UTC m=+1.600465559" Oct 29 04:53:46.691058 kubelet[2172]: I1029 04:53:46.691000 2172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-xtjva.gb1.brightbox.com" podStartSLOduration=1.690989766 podStartE2EDuration="1.690989766s" podCreationTimestamp="2025-10-29 04:53:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 04:53:46.67941634 +0000 UTC m=+1.589452340" watchObservedRunningTime="2025-10-29 04:53:46.690989766 +0000 UTC m=+1.601025760" Oct 29 04:53:47.500984 kubelet[2172]: I1029 04:53:47.500927 2172 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 29 04:53:47.502424 env[1306]: time="2025-10-29T04:53:47.502314410Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 29 04:53:47.502936 kubelet[2172]: I1029 04:53:47.502716 2172 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 29 04:53:48.551297 kubelet[2172]: I1029 04:53:48.551252 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/19a0db1c-6384-42ec-9cc3-99081f26d9a9-kube-proxy\") pod \"kube-proxy-jcclw\" (UID: \"19a0db1c-6384-42ec-9cc3-99081f26d9a9\") " pod="kube-system/kube-proxy-jcclw" Oct 29 04:53:48.552524 kubelet[2172]: I1029 04:53:48.552496 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19a0db1c-6384-42ec-9cc3-99081f26d9a9-lib-modules\") pod \"kube-proxy-jcclw\" (UID: \"19a0db1c-6384-42ec-9cc3-99081f26d9a9\") " pod="kube-system/kube-proxy-jcclw" Oct 29 04:53:48.552750 kubelet[2172]: I1029 04:53:48.552723 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19a0db1c-6384-42ec-9cc3-99081f26d9a9-xtables-lock\") pod \"kube-proxy-jcclw\" (UID: \"19a0db1c-6384-42ec-9cc3-99081f26d9a9\") " pod="kube-system/kube-proxy-jcclw" Oct 29 04:53:48.552934 kubelet[2172]: I1029 04:53:48.552905 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2rd9\" (UniqueName: \"kubernetes.io/projected/19a0db1c-6384-42ec-9cc3-99081f26d9a9-kube-api-access-t2rd9\") pod \"kube-proxy-jcclw\" (UID: \"19a0db1c-6384-42ec-9cc3-99081f26d9a9\") " pod="kube-system/kube-proxy-jcclw" Oct 29 04:53:48.675203 kubelet[2172]: I1029 04:53:48.675143 2172 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Oct 29 04:53:48.754491 kubelet[2172]: I1029 04:53:48.754434 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/699cbdd3-a53b-40cb-83f0-97a780ec60f9-var-lib-calico\") pod \"tigera-operator-7dcd859c48-mw6vz\" (UID: \"699cbdd3-a53b-40cb-83f0-97a780ec60f9\") " pod="tigera-operator/tigera-operator-7dcd859c48-mw6vz" Oct 29 04:53:48.754885 kubelet[2172]: I1029 04:53:48.754852 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdddb\" (UniqueName: \"kubernetes.io/projected/699cbdd3-a53b-40cb-83f0-97a780ec60f9-kube-api-access-cdddb\") pod \"tigera-operator-7dcd859c48-mw6vz\" (UID: \"699cbdd3-a53b-40cb-83f0-97a780ec60f9\") " pod="tigera-operator/tigera-operator-7dcd859c48-mw6vz" Oct 29 04:53:48.767800 env[1306]: time="2025-10-29T04:53:48.767723687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jcclw,Uid:19a0db1c-6384-42ec-9cc3-99081f26d9a9,Namespace:kube-system,Attempt:0,}" Oct 29 04:53:48.799055 env[1306]: time="2025-10-29T04:53:48.798904564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 04:53:48.799306 env[1306]: time="2025-10-29T04:53:48.799036852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 04:53:48.799306 env[1306]: time="2025-10-29T04:53:48.799058157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 04:53:48.799656 env[1306]: time="2025-10-29T04:53:48.799602588Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/660a7f517726206b0c9a0f65ee2de5459069ef8aef2b4549683628e84e64c2c7 pid=2225 runtime=io.containerd.runc.v2 Oct 29 04:53:48.887997 env[1306]: time="2025-10-29T04:53:48.887428114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jcclw,Uid:19a0db1c-6384-42ec-9cc3-99081f26d9a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"660a7f517726206b0c9a0f65ee2de5459069ef8aef2b4549683628e84e64c2c7\"" Oct 29 04:53:48.895351 env[1306]: time="2025-10-29T04:53:48.895281854Z" level=info msg="CreateContainer within sandbox \"660a7f517726206b0c9a0f65ee2de5459069ef8aef2b4549683628e84e64c2c7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 29 04:53:48.918425 env[1306]: time="2025-10-29T04:53:48.918311738Z" level=info msg="CreateContainer within sandbox \"660a7f517726206b0c9a0f65ee2de5459069ef8aef2b4549683628e84e64c2c7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0557e2d9a1244ec357ab451c452dbe12d6d43c4d59bce75ea7ea57db24351000\"" Oct 29 04:53:48.920524 env[1306]: time="2025-10-29T04:53:48.920436415Z" level=info msg="StartContainer for \"0557e2d9a1244ec357ab451c452dbe12d6d43c4d59bce75ea7ea57db24351000\"" Oct 29 04:53:49.003134 env[1306]: time="2025-10-29T04:53:49.003065946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-mw6vz,Uid:699cbdd3-a53b-40cb-83f0-97a780ec60f9,Namespace:tigera-operator,Attempt:0,}" Oct 29 04:53:49.019204 env[1306]: time="2025-10-29T04:53:49.019140298Z" level=info msg="StartContainer for \"0557e2d9a1244ec357ab451c452dbe12d6d43c4d59bce75ea7ea57db24351000\" returns successfully" Oct 29 04:53:49.029959 env[1306]: time="2025-10-29T04:53:49.029848594Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 04:53:49.030226 env[1306]: time="2025-10-29T04:53:49.029934924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 04:53:49.030226 env[1306]: time="2025-10-29T04:53:49.029953779Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 04:53:49.030458 env[1306]: time="2025-10-29T04:53:49.030258858Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/59fb0a9b7cbed2c3034fd5fdac54aa4e653759ffdb941d2e5d1c1cf50fb993e5 pid=2295 runtime=io.containerd.runc.v2 Oct 29 04:53:49.135824 env[1306]: time="2025-10-29T04:53:49.135743210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-mw6vz,Uid:699cbdd3-a53b-40cb-83f0-97a780ec60f9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"59fb0a9b7cbed2c3034fd5fdac54aa4e653759ffdb941d2e5d1c1cf50fb993e5\"" Oct 29 04:53:49.146124 env[1306]: time="2025-10-29T04:53:49.143064091Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 29 04:53:49.510000 audit[2367]: NETFILTER_CFG table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2367 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 04:53:49.516271 kernel: kauditd_printk_skb: 4 callbacks suppressed Oct 29 04:53:49.516406 kernel: audit: type=1325 audit(1761713629.510:241): table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2367 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 04:53:49.510000 audit[2369]: NETFILTER_CFG table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2369 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:49.510000 audit[2369]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe6e7162b0 a2=0 a3=7ffe6e71629c 
items=0 ppid=2278 pid=2369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.531930 kernel: audit: type=1325 audit(1761713629.510:242): table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2369 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:49.532041 kernel: audit: type=1300 audit(1761713629.510:242): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe6e7162b0 a2=0 a3=7ffe6e71629c items=0 ppid=2278 pid=2369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.510000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 29 04:53:49.539401 kernel: audit: type=1327 audit(1761713629.510:242): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 29 04:53:49.510000 audit[2367]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeaa809440 a2=0 a3=7ffeaa80942c items=0 ppid=2278 pid=2367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.549813 kernel: audit: type=1300 audit(1761713629.510:241): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeaa809440 a2=0 a3=7ffeaa80942c items=0 ppid=2278 pid=2367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.510000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 29 04:53:49.559449 kernel: audit: type=1327 audit(1761713629.510:241): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 29 04:53:49.560000 audit[2370]: NETFILTER_CFG table=nat:40 family=10 entries=1 op=nft_register_chain pid=2370 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 04:53:49.560000 audit[2370]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd72638a0 a2=0 a3=7ffcd726388c items=0 ppid=2278 pid=2370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.573452 kernel: audit: type=1325 audit(1761713629.560:243): table=nat:40 family=10 entries=1 op=nft_register_chain pid=2370 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 04:53:49.573567 kernel: audit: type=1300 audit(1761713629.560:243): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd72638a0 a2=0 a3=7ffcd726388c items=0 ppid=2278 pid=2370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.564000 audit[2371]: NETFILTER_CFG table=nat:41 family=2 entries=1 op=nft_register_chain pid=2371 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:49.580400 kernel: audit: type=1325 audit(1761713629.564:244): table=nat:41 family=2 entries=1 op=nft_register_chain pid=2371 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:49.564000 audit[2371]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe536ef4f0 a2=0 a3=7ffe536ef4dc items=0 ppid=2278 pid=2371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.564000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 29 04:53:49.589615 kernel: audit: type=1300 audit(1761713629.564:244): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe536ef4f0 a2=0 a3=7ffe536ef4dc items=0 ppid=2278 pid=2371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.560000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 29 04:53:49.589000 audit[2372]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_chain pid=2372 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:49.589000 audit[2372]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe97608bd0 a2=0 a3=7ffe97608bbc items=0 ppid=2278 pid=2372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.589000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 29 04:53:49.592000 audit[2373]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2373 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 04:53:49.592000 audit[2373]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc69298200 a2=0 a3=7ffc692981ec items=0 ppid=2278 pid=2373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.592000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 29 04:53:49.636000 audit[2374]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2374 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:49.636000 audit[2374]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe7cae2f70 a2=0 a3=7ffe7cae2f5c items=0 ppid=2278 pid=2374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.636000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 29 04:53:49.644000 audit[2376]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2376 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:49.644000 audit[2376]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffea39eda20 a2=0 a3=7ffea39eda0c items=0 ppid=2278 pid=2376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.644000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 29 04:53:49.653000 audit[2379]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2379 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:49.653000 audit[2379]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffdecddc430 
a2=0 a3=7ffdecddc41c items=0 ppid=2278 pid=2379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.653000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 29 04:53:49.656000 audit[2380]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2380 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:49.656000 audit[2380]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff56d55420 a2=0 a3=7fff56d5540c items=0 ppid=2278 pid=2380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.656000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 29 04:53:49.660000 audit[2382]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2382 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:49.660000 audit[2382]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffee376ab10 a2=0 a3=7ffee376aafc items=0 ppid=2278 pid=2382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.660000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 29 04:53:49.662000 audit[2383]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2383 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:49.662000 audit[2383]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe20527700 a2=0 a3=7ffe205276ec items=0 ppid=2278 pid=2383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.662000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 29 04:53:49.667000 audit[2385]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2385 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:49.667000 audit[2385]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcb9ac4590 a2=0 a3=7ffcb9ac457c items=0 ppid=2278 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.667000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 29 04:53:49.673000 audit[2388]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2388 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:49.673000 audit[2388]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=744 a0=3 a1=7fffccc5a140 a2=0 a3=7fffccc5a12c items=0 ppid=2278 pid=2388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.673000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 29 04:53:49.676000 audit[2389]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2389 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:49.676000 audit[2389]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffde296e520 a2=0 a3=7ffde296e50c items=0 ppid=2278 pid=2389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.676000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 29 04:53:49.681000 audit[2391]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2391 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:49.681000 audit[2391]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe37312890 a2=0 a3=7ffe3731287c items=0 ppid=2278 pid=2391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.681000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 29 04:53:49.684000 audit[2392]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2392 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:49.684000 audit[2392]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd8e7248d0 a2=0 a3=7ffd8e7248bc items=0 ppid=2278 pid=2392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.684000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 29 04:53:49.694000 audit[2394]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2394 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:49.694000 audit[2394]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffeddcc9db0 a2=0 a3=7ffeddcc9d9c items=0 ppid=2278 pid=2394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.694000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 29 04:53:49.705000 audit[2397]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2397 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:49.705000 audit[2397]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 
a1=7ffe834716b0 a2=0 a3=7ffe8347169c items=0 ppid=2278 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.705000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 29 04:53:49.712000 audit[2400]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2400 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:49.712000 audit[2400]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe5201e9d0 a2=0 a3=7ffe5201e9bc items=0 ppid=2278 pid=2400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.712000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 29 04:53:49.714000 audit[2401]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2401 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:49.714000 audit[2401]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffbd6b2230 a2=0 a3=7fffbd6b221c items=0 ppid=2278 pid=2401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.714000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 29 04:53:49.719000 audit[2403]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2403 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:49.719000 audit[2403]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc67de5f40 a2=0 a3=7ffc67de5f2c items=0 ppid=2278 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.719000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 29 04:53:49.725000 audit[2406]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2406 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:49.725000 audit[2406]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe8311c340 a2=0 a3=7ffe8311c32c items=0 ppid=2278 pid=2406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.725000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 29 04:53:49.727000 audit[2407]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2407 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:49.727000 audit[2407]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffca525b4e0 a2=0 a3=7ffca525b4cc items=0 ppid=2278 pid=2407 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.727000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 29 04:53:49.731000 audit[2409]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2409 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 29 04:53:49.731000 audit[2409]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffe47503080 a2=0 a3=7ffe4750306c items=0 ppid=2278 pid=2409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.731000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 29 04:53:49.777000 audit[2415]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2415 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:53:49.777000 audit[2415]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd57490de0 a2=0 a3=7ffd57490dcc items=0 ppid=2278 pid=2415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.777000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:53:49.795000 audit[2415]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2415 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Oct 29 04:53:49.795000 audit[2415]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd57490de0 a2=0 a3=7ffd57490dcc items=0 ppid=2278 pid=2415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.795000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:53:49.801000 audit[2420]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2420 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 04:53:49.801000 audit[2420]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff0a1d19f0 a2=0 a3=7fff0a1d19dc items=0 ppid=2278 pid=2420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.801000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 29 04:53:49.811796 kubelet[2172]: I1029 04:53:49.811688 2172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jcclw" podStartSLOduration=1.8116657520000001 podStartE2EDuration="1.811665752s" podCreationTimestamp="2025-10-29 04:53:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 04:53:49.575879708 +0000 UTC m=+4.485915715" watchObservedRunningTime="2025-10-29 04:53:49.811665752 +0000 UTC m=+4.721701746" Oct 29 04:53:49.812000 audit[2422]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2422 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 04:53:49.812000 audit[2422]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff8e47e8e0 a2=0 a3=7fff8e47e8cc items=0 ppid=2278 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.812000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 29 04:53:49.818000 audit[2425]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2425 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 04:53:49.818000 audit[2425]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd6198df40 a2=0 a3=7ffd6198df2c items=0 ppid=2278 pid=2425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.818000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 29 04:53:49.820000 audit[2426]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2426 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 04:53:49.820000 audit[2426]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff69de4b10 a2=0 a3=7fff69de4afc items=0 ppid=2278 pid=2426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 
04:53:49.820000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 29 04:53:49.824000 audit[2428]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2428 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 04:53:49.824000 audit[2428]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffddca8ebd0 a2=0 a3=7ffddca8ebbc items=0 ppid=2278 pid=2428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.824000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 29 04:53:49.825000 audit[2429]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2429 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 04:53:49.825000 audit[2429]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcd2638350 a2=0 a3=7ffcd263833c items=0 ppid=2278 pid=2429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.825000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 29 04:53:49.829000 audit[2431]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2431 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 04:53:49.829000 audit[2431]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffecb388040 a2=0 a3=7ffecb38802c items=0 ppid=2278 pid=2431 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.829000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 29 04:53:49.835000 audit[2434]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2434 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 04:53:49.835000 audit[2434]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff7b83c8d0 a2=0 a3=7fff7b83c8bc items=0 ppid=2278 pid=2434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.835000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 29 04:53:49.838000 audit[2435]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2435 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 04:53:49.838000 audit[2435]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff392f88b0 a2=0 a3=7fff392f889c items=0 ppid=2278 pid=2435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.838000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 29 04:53:49.841000 
audit[2437]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2437 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 04:53:49.841000 audit[2437]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff6388cff0 a2=0 a3=7fff6388cfdc items=0 ppid=2278 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.841000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 29 04:53:49.843000 audit[2438]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2438 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 04:53:49.843000 audit[2438]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffca9922d80 a2=0 a3=7ffca9922d6c items=0 ppid=2278 pid=2438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.843000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 29 04:53:49.847000 audit[2440]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2440 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 04:53:49.847000 audit[2440]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff1b06ba80 a2=0 a3=7fff1b06ba6c items=0 ppid=2278 pid=2440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 
04:53:49.847000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 29 04:53:49.853000 audit[2443]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2443 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 04:53:49.853000 audit[2443]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe927f8ef0 a2=0 a3=7ffe927f8edc items=0 ppid=2278 pid=2443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.853000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 29 04:53:49.859000 audit[2446]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2446 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 04:53:49.859000 audit[2446]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdeed7ee80 a2=0 a3=7ffdeed7ee6c items=0 ppid=2278 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.859000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 29 04:53:49.862000 audit[2447]: 
NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2447 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 04:53:49.862000 audit[2447]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdede84750 a2=0 a3=7ffdede8473c items=0 ppid=2278 pid=2447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.862000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 29 04:53:49.865000 audit[2449]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2449 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 04:53:49.865000 audit[2449]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fffd97e9c70 a2=0 a3=7fffd97e9c5c items=0 ppid=2278 pid=2449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.865000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 29 04:53:49.870000 audit[2452]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2452 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 04:53:49.870000 audit[2452]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffcf42b7820 a2=0 a3=7ffcf42b780c items=0 ppid=2278 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.870000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 29 04:53:49.872000 audit[2453]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2453 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 04:53:49.872000 audit[2453]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc5541d070 a2=0 a3=7ffc5541d05c items=0 ppid=2278 pid=2453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.872000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 29 04:53:49.876000 audit[2455]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2455 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 04:53:49.876000 audit[2455]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd7f6d3f20 a2=0 a3=7ffd7f6d3f0c items=0 ppid=2278 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.876000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 29 04:53:49.878000 audit[2456]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2456 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 04:53:49.878000 audit[2456]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce0ab4820 a2=0 
a3=7ffce0ab480c items=0 ppid=2278 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.878000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 29 04:53:49.882000 audit[2458]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2458 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 04:53:49.882000 audit[2458]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcf303b6c0 a2=0 a3=7ffcf303b6ac items=0 ppid=2278 pid=2458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.882000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 29 04:53:49.887000 audit[2461]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2461 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 29 04:53:49.887000 audit[2461]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe62a7d860 a2=0 a3=7ffe62a7d84c items=0 ppid=2278 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.887000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 29 04:53:49.892000 audit[2463]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2463 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 29 04:53:49.892000 audit[2463]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffc1cd40410 a2=0 a3=7ffc1cd403fc items=0 ppid=2278 pid=2463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.892000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:53:49.893000 audit[2463]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2463 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 29 04:53:49.893000 audit[2463]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffc1cd40410 a2=0 a3=7ffc1cd403fc items=0 ppid=2278 pid=2463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:53:49.893000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:53:51.326490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4059601862.mount: Deactivated successfully. 
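The audit PROCTITLE records above carry the invoked command line as a hex string with NUL bytes separating the argv elements. A minimal standalone sketch for decoding them (the helper name is ours, not part of the logged system; the sample payload is copied from the first KUBE-PROXY-CANARY record above):

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE hex payload into the original command line.

    argv elements are NUL-separated in the raw bytes, so we split on
    b'\\x00' boundaries and rejoin with spaces for display.
    """
    raw = bytes.fromhex(hex_str)
    return " ".join(raw.decode("ascii", errors="replace").split("\x00"))


# Sample payload taken verbatim from the log (creation of the
# KUBE-PROXY-CANARY chain in the IPv6 filter table):
title = (
    "6970367461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D50524F58592D43414E415259002D740066696C746572"
)
print(decode_proctitle(title))
# ip6tables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t filter
```

Note that many PROCTITLE payloads in the log are cut off mid-argument; the audit subsystem truncates long proctitles, so decoding those yields only the leading portion of the command.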
Oct 29 04:53:52.772925 env[1306]: time="2025-10-29T04:53:52.772856892Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:52.774949 env[1306]: time="2025-10-29T04:53:52.774902547Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:52.777448 env[1306]: time="2025-10-29T04:53:52.777409877Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:52.779359 env[1306]: time="2025-10-29T04:53:52.779323470Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:53:52.780573 env[1306]: time="2025-10-29T04:53:52.780531328Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 29 04:53:52.792594 env[1306]: time="2025-10-29T04:53:52.792525828Z" level=info msg="CreateContainer within sandbox \"59fb0a9b7cbed2c3034fd5fdac54aa4e653759ffdb941d2e5d1c1cf50fb993e5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 29 04:53:52.804536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount910782984.mount: Deactivated successfully. 
Oct 29 04:53:52.816215 env[1306]: time="2025-10-29T04:53:52.816153179Z" level=info msg="CreateContainer within sandbox \"59fb0a9b7cbed2c3034fd5fdac54aa4e653759ffdb941d2e5d1c1cf50fb993e5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"41f2f8d4d1425d78d852e3b66f948d02bed66b42ad0939fb12619c873bc53623\"" Oct 29 04:53:52.819170 env[1306]: time="2025-10-29T04:53:52.819133065Z" level=info msg="StartContainer for \"41f2f8d4d1425d78d852e3b66f948d02bed66b42ad0939fb12619c873bc53623\"" Oct 29 04:53:53.065471 env[1306]: time="2025-10-29T04:53:53.065311347Z" level=info msg="StartContainer for \"41f2f8d4d1425d78d852e3b66f948d02bed66b42ad0939fb12619c873bc53623\" returns successfully" Oct 29 04:53:53.801239 systemd[1]: run-containerd-runc-k8s.io-41f2f8d4d1425d78d852e3b66f948d02bed66b42ad0939fb12619c873bc53623-runc.uyj8O8.mount: Deactivated successfully. Oct 29 04:53:53.981163 kubelet[2172]: I1029 04:53:53.981070 2172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-mw6vz" podStartSLOduration=2.340045956 podStartE2EDuration="5.981021221s" podCreationTimestamp="2025-10-29 04:53:48 +0000 UTC" firstStartedPulling="2025-10-29 04:53:49.141757362 +0000 UTC m=+4.051793344" lastFinishedPulling="2025-10-29 04:53:52.782732621 +0000 UTC m=+7.692768609" observedRunningTime="2025-10-29 04:53:53.579724525 +0000 UTC m=+8.489760524" watchObservedRunningTime="2025-10-29 04:53:53.981021221 +0000 UTC m=+8.891057225" Oct 29 04:54:00.381612 sudo[1495]: pam_unix(sudo:session): session closed for user root Oct 29 04:54:00.392429 kernel: kauditd_printk_skb: 143 callbacks suppressed Oct 29 04:54:00.392634 kernel: audit: type=1106 audit(1761713640.381:292): pid=1495 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 29 04:54:00.381000 audit[1495]: USER_END pid=1495 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 29 04:54:00.381000 audit[1495]: CRED_DISP pid=1495 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 29 04:54:00.400401 kernel: audit: type=1104 audit(1761713640.381:293): pid=1495 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 29 04:54:00.545816 sshd[1491]: pam_unix(sshd:session): session closed for user core Oct 29 04:54:00.552000 audit[1491]: USER_END pid=1491 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:54:00.561969 kernel: audit: type=1106 audit(1761713640.552:294): pid=1491 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:54:00.560000 audit[1491]: CRED_DISP pid=1491 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:54:00.566299 systemd[1]: sshd@6-10.230.24.246:22-147.75.109.163:53498.service: Deactivated successfully. 
Oct 29 04:54:00.567963 systemd[1]: session-7.scope: Deactivated successfully. Oct 29 04:54:00.571433 kernel: audit: type=1104 audit(1761713640.560:295): pid=1491 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:54:00.571529 kernel: audit: type=1131 audit(1761713640.564:296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.230.24.246:22-147.75.109.163:53498 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:54:00.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.230.24.246:22-147.75.109.163:53498 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:54:00.577394 systemd-logind[1290]: Session 7 logged out. Waiting for processes to exit. Oct 29 04:54:00.579586 systemd-logind[1290]: Removed session 7. 
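The `audit(1761713640.381:292)`-style tokens printed by kauditd encode an epoch timestamp (seconds.milliseconds) plus a per-boot event serial. A small sketch of converting one back to the wall-clock time shown in the journal prefix (the function name is illustrative only):

```python
from datetime import datetime, timezone

def parse_audit_stamp(stamp: str) -> tuple[datetime, int]:
    """Split 'audit(<epoch>.<ms>:<serial>)' into a UTC datetime and the serial."""
    inner = stamp[len("audit("):-1]      # strip the 'audit(' prefix and ')' suffix
    ts, serial = inner.split(":")
    return datetime.fromtimestamp(float(ts), tz=timezone.utc), int(serial)

when, serial = parse_audit_stamp("audit(1761713640.381:292)")
print(when.isoformat(), serial)  # lines up with the 'Oct 29 04:54:00' journal entries above
```

This is handy when correlating raw `type=1106`/`type=1104` kernel lines with the journald timestamps, since the two clocks agree only after converting the epoch value.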
Oct 29 04:54:01.520000 audit[2546]: NETFILTER_CFG table=filter:89 family=2 entries=14 op=nft_register_rule pid=2546 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:01.532412 kernel: audit: type=1325 audit(1761713641.520:297): table=filter:89 family=2 entries=14 op=nft_register_rule pid=2546 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:01.520000 audit[2546]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe56138880 a2=0 a3=7ffe5613886c items=0 ppid=2278 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:01.555408 kernel: audit: type=1300 audit(1761713641.520:297): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe56138880 a2=0 a3=7ffe5613886c items=0 ppid=2278 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:01.520000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:01.570394 kernel: audit: type=1327 audit(1761713641.520:297): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:01.559000 audit[2546]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2546 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:01.576431 kernel: audit: type=1325 audit(1761713641.559:298): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2546 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:01.559000 audit[2546]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe56138880 a2=0 a3=0 items=0 ppid=2278 pid=2546 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:01.587401 kernel: audit: type=1300 audit(1761713641.559:298): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe56138880 a2=0 a3=0 items=0 ppid=2278 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:01.559000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:01.619000 audit[2548]: NETFILTER_CFG table=filter:91 family=2 entries=15 op=nft_register_rule pid=2548 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:01.619000 audit[2548]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffefe4e83a0 a2=0 a3=7ffefe4e838c items=0 ppid=2278 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:01.619000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:01.624000 audit[2548]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2548 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:01.624000 audit[2548]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffefe4e83a0 a2=0 a3=0 items=0 ppid=2278 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:01.624000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:04.625000 audit[2551]: NETFILTER_CFG table=filter:93 family=2 entries=16 op=nft_register_rule pid=2551 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:04.625000 audit[2551]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffd04dbd20 a2=0 a3=7fffd04dbd0c items=0 ppid=2278 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:04.625000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:04.629000 audit[2551]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2551 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:04.629000 audit[2551]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffd04dbd20 a2=0 a3=0 items=0 ppid=2278 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:04.629000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:04.662000 audit[2553]: NETFILTER_CFG table=filter:95 family=2 entries=17 op=nft_register_rule pid=2553 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:04.662000 audit[2553]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe057a6460 a2=0 a3=7ffe057a644c items=0 ppid=2278 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:04.662000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:04.670000 audit[2553]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2553 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:04.670000 audit[2553]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe057a6460 a2=0 a3=0 items=0 ppid=2278 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:04.670000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:05.685000 audit[2555]: NETFILTER_CFG table=filter:97 family=2 entries=19 op=nft_register_rule pid=2555 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:05.691829 kernel: kauditd_printk_skb: 19 callbacks suppressed Oct 29 04:54:05.692116 kernel: audit: type=1325 audit(1761713645.685:305): table=filter:97 family=2 entries=19 op=nft_register_rule pid=2555 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:05.685000 audit[2555]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd713e8700 a2=0 a3=7ffd713e86ec items=0 ppid=2278 pid=2555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:05.685000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:05.708301 kernel: audit: type=1300 audit(1761713645.685:305): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd713e8700 a2=0 
a3=7ffd713e86ec items=0 ppid=2278 pid=2555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:05.708552 kernel: audit: type=1327 audit(1761713645.685:305): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:05.717000 audit[2555]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2555 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:05.722417 kernel: audit: type=1325 audit(1761713645.717:306): table=nat:98 family=2 entries=12 op=nft_register_rule pid=2555 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:05.717000 audit[2555]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd713e8700 a2=0 a3=0 items=0 ppid=2278 pid=2555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:05.731427 kernel: audit: type=1300 audit(1761713645.717:306): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd713e8700 a2=0 a3=0 items=0 ppid=2278 pid=2555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:05.717000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:05.740294 kernel: audit: type=1327 audit(1761713645.717:306): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:07.364000 audit[2557]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=2557 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:07.372427 kernel: audit: type=1325 audit(1761713647.364:307): table=filter:99 family=2 entries=21 op=nft_register_rule pid=2557 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:07.364000 audit[2557]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff66cbd6e0 a2=0 a3=7fff66cbd6cc items=0 ppid=2278 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:07.382520 kernel: audit: type=1300 audit(1761713647.364:307): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff66cbd6e0 a2=0 a3=7fff66cbd6cc items=0 ppid=2278 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:07.364000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:07.380000 audit[2557]: NETFILTER_CFG table=nat:100 family=2 entries=12 op=nft_register_rule pid=2557 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:07.391299 kernel: audit: type=1327 audit(1761713647.364:307): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:07.391428 kernel: audit: type=1325 audit(1761713647.380:308): table=nat:100 family=2 entries=12 op=nft_register_rule pid=2557 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:07.380000 audit[2557]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff66cbd6e0 a2=0 a3=0 items=0 ppid=2278 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:07.380000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:07.471794 kubelet[2172]: I1029 04:54:07.471736 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e6a6845-c965-4e8b-ba1d-87c4349fc267-tigera-ca-bundle\") pod \"calico-typha-56d86bb6f8-dk2lt\" (UID: \"9e6a6845-c965-4e8b-ba1d-87c4349fc267\") " pod="calico-system/calico-typha-56d86bb6f8-dk2lt" Oct 29 04:54:07.472777 kubelet[2172]: I1029 04:54:07.472747 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9e6a6845-c965-4e8b-ba1d-87c4349fc267-typha-certs\") pod \"calico-typha-56d86bb6f8-dk2lt\" (UID: \"9e6a6845-c965-4e8b-ba1d-87c4349fc267\") " pod="calico-system/calico-typha-56d86bb6f8-dk2lt" Oct 29 04:54:07.473029 kubelet[2172]: I1029 04:54:07.472996 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvdgp\" (UniqueName: \"kubernetes.io/projected/9e6a6845-c965-4e8b-ba1d-87c4349fc267-kube-api-access-pvdgp\") pod \"calico-typha-56d86bb6f8-dk2lt\" (UID: \"9e6a6845-c965-4e8b-ba1d-87c4349fc267\") " pod="calico-system/calico-typha-56d86bb6f8-dk2lt" Oct 29 04:54:07.574291 kubelet[2172]: I1029 04:54:07.574226 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/70448abd-55db-419e-b8b2-97c21f91c915-cni-log-dir\") pod \"calico-node-4znh4\" (UID: \"70448abd-55db-419e-b8b2-97c21f91c915\") " pod="calico-system/calico-node-4znh4" Oct 29 04:54:07.574291 kubelet[2172]: I1029 04:54:07.574295 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/70448abd-55db-419e-b8b2-97c21f91c915-cni-net-dir\") pod \"calico-node-4znh4\" (UID: \"70448abd-55db-419e-b8b2-97c21f91c915\") " pod="calico-system/calico-node-4znh4" Oct 29 04:54:07.574576 kubelet[2172]: I1029 04:54:07.574331 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70448abd-55db-419e-b8b2-97c21f91c915-lib-modules\") pod \"calico-node-4znh4\" (UID: \"70448abd-55db-419e-b8b2-97c21f91c915\") " pod="calico-system/calico-node-4znh4" Oct 29 04:54:07.574576 kubelet[2172]: I1029 04:54:07.574361 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/70448abd-55db-419e-b8b2-97c21f91c915-var-lib-calico\") pod \"calico-node-4znh4\" (UID: \"70448abd-55db-419e-b8b2-97c21f91c915\") " pod="calico-system/calico-node-4znh4" Oct 29 04:54:07.574576 kubelet[2172]: I1029 04:54:07.574411 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/70448abd-55db-419e-b8b2-97c21f91c915-var-run-calico\") pod \"calico-node-4znh4\" (UID: \"70448abd-55db-419e-b8b2-97c21f91c915\") " pod="calico-system/calico-node-4znh4" Oct 29 04:54:07.574576 kubelet[2172]: I1029 04:54:07.574439 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn56w\" (UniqueName: \"kubernetes.io/projected/70448abd-55db-419e-b8b2-97c21f91c915-kube-api-access-gn56w\") pod \"calico-node-4znh4\" (UID: \"70448abd-55db-419e-b8b2-97c21f91c915\") " pod="calico-system/calico-node-4znh4" Oct 29 04:54:07.574576 kubelet[2172]: I1029 04:54:07.574487 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/70448abd-55db-419e-b8b2-97c21f91c915-cni-bin-dir\") pod \"calico-node-4znh4\" (UID: \"70448abd-55db-419e-b8b2-97c21f91c915\") " pod="calico-system/calico-node-4znh4" Oct 29 04:54:07.574873 kubelet[2172]: I1029 04:54:07.574516 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/70448abd-55db-419e-b8b2-97c21f91c915-policysync\") pod \"calico-node-4znh4\" (UID: \"70448abd-55db-419e-b8b2-97c21f91c915\") " pod="calico-system/calico-node-4znh4" Oct 29 04:54:07.574873 kubelet[2172]: I1029 04:54:07.574552 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70448abd-55db-419e-b8b2-97c21f91c915-xtables-lock\") pod \"calico-node-4znh4\" (UID: \"70448abd-55db-419e-b8b2-97c21f91c915\") " pod="calico-system/calico-node-4znh4" Oct 29 04:54:07.574873 kubelet[2172]: I1029 04:54:07.574594 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/70448abd-55db-419e-b8b2-97c21f91c915-flexvol-driver-host\") pod \"calico-node-4znh4\" (UID: \"70448abd-55db-419e-b8b2-97c21f91c915\") " pod="calico-system/calico-node-4znh4" Oct 29 04:54:07.574873 kubelet[2172]: I1029 04:54:07.574659 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70448abd-55db-419e-b8b2-97c21f91c915-tigera-ca-bundle\") pod \"calico-node-4znh4\" (UID: \"70448abd-55db-419e-b8b2-97c21f91c915\") " pod="calico-system/calico-node-4znh4" Oct 29 04:54:07.574873 kubelet[2172]: I1029 04:54:07.574720 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: 
\"kubernetes.io/secret/70448abd-55db-419e-b8b2-97c21f91c915-node-certs\") pod \"calico-node-4znh4\" (UID: \"70448abd-55db-419e-b8b2-97c21f91c915\") " pod="calico-system/calico-node-4znh4" Oct 29 04:54:07.691866 kubelet[2172]: E1029 04:54:07.691815 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.692113 kubelet[2172]: W1029 04:54:07.692082 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.692992 kubelet[2172]: E1029 04:54:07.692961 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:07.702761 kubelet[2172]: E1029 04:54:07.702679 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.702761 kubelet[2172]: W1029 04:54:07.702705 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.702761 kubelet[2172]: E1029 04:54:07.702731 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:07.740369 env[1306]: time="2025-10-29T04:54:07.740241287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56d86bb6f8-dk2lt,Uid:9e6a6845-c965-4e8b-ba1d-87c4349fc267,Namespace:calico-system,Attempt:0,}" Oct 29 04:54:07.787002 kubelet[2172]: E1029 04:54:07.786695 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tptz2" podUID="de4b152a-29bb-4b0c-a12c-2eda92dd0564" Oct 29 04:54:07.796844 env[1306]: time="2025-10-29T04:54:07.792795989Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 04:54:07.796844 env[1306]: time="2025-10-29T04:54:07.796254161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 04:54:07.796844 env[1306]: time="2025-10-29T04:54:07.796281414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 04:54:07.797224 env[1306]: time="2025-10-29T04:54:07.796923842Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/196c0b9976321ebf91533105791e81f06246c6e9cba5629851cb5b03f09232f6 pid=2570 runtime=io.containerd.runc.v2 Oct 29 04:54:07.874677 env[1306]: time="2025-10-29T04:54:07.873510402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4znh4,Uid:70448abd-55db-419e-b8b2-97c21f91c915,Namespace:calico-system,Attempt:0,}" Oct 29 04:54:07.874888 kubelet[2172]: E1029 04:54:07.873767 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.874888 kubelet[2172]: W1029 04:54:07.873805 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.874888 kubelet[2172]: E1029 04:54:07.873845 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:07.874888 kubelet[2172]: E1029 04:54:07.874126 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.874888 kubelet[2172]: W1029 04:54:07.874141 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.874888 kubelet[2172]: E1029 04:54:07.874158 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:07.874888 kubelet[2172]: E1029 04:54:07.874439 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.874888 kubelet[2172]: W1029 04:54:07.874454 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.874888 kubelet[2172]: E1029 04:54:07.874479 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:07.881157 kubelet[2172]: E1029 04:54:07.875468 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.881157 kubelet[2172]: W1029 04:54:07.875495 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.881157 kubelet[2172]: E1029 04:54:07.875515 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:07.881157 kubelet[2172]: E1029 04:54:07.880693 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.881157 kubelet[2172]: W1029 04:54:07.880712 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.881157 kubelet[2172]: E1029 04:54:07.880730 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:07.881157 kubelet[2172]: E1029 04:54:07.881019 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.881157 kubelet[2172]: W1029 04:54:07.881033 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.881157 kubelet[2172]: E1029 04:54:07.881049 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:07.882160 kubelet[2172]: E1029 04:54:07.881788 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.882160 kubelet[2172]: W1029 04:54:07.881807 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.882160 kubelet[2172]: E1029 04:54:07.881859 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:07.882604 kubelet[2172]: E1029 04:54:07.882433 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.882604 kubelet[2172]: W1029 04:54:07.882453 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.882604 kubelet[2172]: E1029 04:54:07.882470 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:07.883031 kubelet[2172]: E1029 04:54:07.883009 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.883166 kubelet[2172]: W1029 04:54:07.883140 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.883430 kubelet[2172]: E1029 04:54:07.883281 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:07.887462 kubelet[2172]: E1029 04:54:07.887438 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.887614 kubelet[2172]: W1029 04:54:07.887587 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.887765 kubelet[2172]: E1029 04:54:07.887734 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:07.888517 kubelet[2172]: E1029 04:54:07.888494 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.888659 kubelet[2172]: W1029 04:54:07.888633 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.888810 kubelet[2172]: E1029 04:54:07.888782 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:07.889239 kubelet[2172]: E1029 04:54:07.889217 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.889397 kubelet[2172]: W1029 04:54:07.889356 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.889529 kubelet[2172]: E1029 04:54:07.889504 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:07.889987 kubelet[2172]: E1029 04:54:07.889966 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.890126 kubelet[2172]: W1029 04:54:07.890101 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.890280 kubelet[2172]: E1029 04:54:07.890255 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:07.896646 kubelet[2172]: E1029 04:54:07.890682 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.897128 kubelet[2172]: W1029 04:54:07.896802 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.897128 kubelet[2172]: E1029 04:54:07.896856 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:07.897503 kubelet[2172]: E1029 04:54:07.897471 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.897644 kubelet[2172]: W1029 04:54:07.897618 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.897792 kubelet[2172]: E1029 04:54:07.897767 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:07.898204 kubelet[2172]: E1029 04:54:07.898182 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.898704 kubelet[2172]: W1029 04:54:07.898667 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.898866 kubelet[2172]: E1029 04:54:07.898840 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:07.902431 kubelet[2172]: E1029 04:54:07.899359 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.902431 kubelet[2172]: W1029 04:54:07.899400 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.902431 kubelet[2172]: E1029 04:54:07.899421 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:07.902611 kubelet[2172]: E1029 04:54:07.902485 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.902611 kubelet[2172]: W1029 04:54:07.902501 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.902611 kubelet[2172]: E1029 04:54:07.902521 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:07.906270 kubelet[2172]: E1029 04:54:07.902854 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.906270 kubelet[2172]: W1029 04:54:07.902869 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.906270 kubelet[2172]: E1029 04:54:07.902885 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:07.906270 kubelet[2172]: E1029 04:54:07.903142 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.906270 kubelet[2172]: W1029 04:54:07.903156 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.906270 kubelet[2172]: E1029 04:54:07.903172 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:07.906270 kubelet[2172]: E1029 04:54:07.903562 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.906270 kubelet[2172]: W1029 04:54:07.903577 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.906270 kubelet[2172]: E1029 04:54:07.903592 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:07.906771 kubelet[2172]: I1029 04:54:07.903621 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/de4b152a-29bb-4b0c-a12c-2eda92dd0564-socket-dir\") pod \"csi-node-driver-tptz2\" (UID: \"de4b152a-29bb-4b0c-a12c-2eda92dd0564\") " pod="calico-system/csi-node-driver-tptz2" Oct 29 04:54:07.906771 kubelet[2172]: E1029 04:54:07.903907 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.906771 kubelet[2172]: W1029 04:54:07.903924 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.906771 kubelet[2172]: E1029 04:54:07.903940 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:07.906771 kubelet[2172]: I1029 04:54:07.903975 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/de4b152a-29bb-4b0c-a12c-2eda92dd0564-kubelet-dir\") pod \"csi-node-driver-tptz2\" (UID: \"de4b152a-29bb-4b0c-a12c-2eda92dd0564\") " pod="calico-system/csi-node-driver-tptz2" Oct 29 04:54:07.906771 kubelet[2172]: E1029 04:54:07.906593 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.906771 kubelet[2172]: W1029 04:54:07.906610 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.906771 kubelet[2172]: E1029 04:54:07.906628 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:07.907271 kubelet[2172]: I1029 04:54:07.906654 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/de4b152a-29bb-4b0c-a12c-2eda92dd0564-varrun\") pod \"csi-node-driver-tptz2\" (UID: \"de4b152a-29bb-4b0c-a12c-2eda92dd0564\") " pod="calico-system/csi-node-driver-tptz2" Oct 29 04:54:07.919076 kubelet[2172]: E1029 04:54:07.919030 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.919076 kubelet[2172]: W1029 04:54:07.919068 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.919287 kubelet[2172]: E1029 04:54:07.919098 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:07.919287 kubelet[2172]: I1029 04:54:07.919134 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdjn2\" (UniqueName: \"kubernetes.io/projected/de4b152a-29bb-4b0c-a12c-2eda92dd0564-kube-api-access-kdjn2\") pod \"csi-node-driver-tptz2\" (UID: \"de4b152a-29bb-4b0c-a12c-2eda92dd0564\") " pod="calico-system/csi-node-driver-tptz2" Oct 29 04:54:07.922683 kubelet[2172]: E1029 04:54:07.922649 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.922683 kubelet[2172]: W1029 04:54:07.922675 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.922962 kubelet[2172]: E1029 04:54:07.922851 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:07.922962 kubelet[2172]: I1029 04:54:07.922910 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/de4b152a-29bb-4b0c-a12c-2eda92dd0564-registration-dir\") pod \"csi-node-driver-tptz2\" (UID: \"de4b152a-29bb-4b0c-a12c-2eda92dd0564\") " pod="calico-system/csi-node-driver-tptz2" Oct 29 04:54:07.922962 kubelet[2172]: E1029 04:54:07.922937 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.922962 kubelet[2172]: W1029 04:54:07.922952 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.923227 kubelet[2172]: E1029 04:54:07.923072 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:07.923302 kubelet[2172]: E1029 04:54:07.923242 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.923302 kubelet[2172]: W1029 04:54:07.923256 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.923420 kubelet[2172]: E1029 04:54:07.923369 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:07.923589 kubelet[2172]: E1029 04:54:07.923558 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.923589 kubelet[2172]: W1029 04:54:07.923577 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.926477 kubelet[2172]: E1029 04:54:07.926441 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:07.926717 kubelet[2172]: E1029 04:54:07.926688 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.926717 kubelet[2172]: W1029 04:54:07.926711 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.926966 kubelet[2172]: E1029 04:54:07.926921 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:07.927045 kubelet[2172]: E1029 04:54:07.926979 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.927045 kubelet[2172]: W1029 04:54:07.926994 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.927188 kubelet[2172]: E1029 04:54:07.927119 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:07.932386 kubelet[2172]: E1029 04:54:07.932347 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.932663 kubelet[2172]: W1029 04:54:07.932627 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.932769 kubelet[2172]: E1029 04:54:07.932665 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:07.933021 kubelet[2172]: E1029 04:54:07.932997 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.933021 kubelet[2172]: W1029 04:54:07.933017 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.933151 kubelet[2172]: E1029 04:54:07.933036 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:07.933338 kubelet[2172]: E1029 04:54:07.933293 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.933338 kubelet[2172]: W1029 04:54:07.933307 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.933338 kubelet[2172]: E1029 04:54:07.933330 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:07.933665 kubelet[2172]: E1029 04:54:07.933642 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.933665 kubelet[2172]: W1029 04:54:07.933662 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.933821 kubelet[2172]: E1029 04:54:07.933679 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:07.939462 kubelet[2172]: E1029 04:54:07.936637 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:07.939462 kubelet[2172]: W1029 04:54:07.936661 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:07.939462 kubelet[2172]: E1029 04:54:07.936679 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:07.947476 env[1306]: time="2025-10-29T04:54:07.942509559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 04:54:07.947476 env[1306]: time="2025-10-29T04:54:07.942571580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 04:54:07.947476 env[1306]: time="2025-10-29T04:54:07.942589820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 04:54:07.950759 env[1306]: time="2025-10-29T04:54:07.948550178Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8528ad5c6863c70de6f51458dce99c6652ccf0708b0d4f89cf65ce6624f2b48b pid=2645 runtime=io.containerd.runc.v2 Oct 29 04:54:07.999601 env[1306]: time="2025-10-29T04:54:07.999535330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56d86bb6f8-dk2lt,Uid:9e6a6845-c965-4e8b-ba1d-87c4349fc267,Namespace:calico-system,Attempt:0,} returns sandbox id \"196c0b9976321ebf91533105791e81f06246c6e9cba5629851cb5b03f09232f6\"" Oct 29 04:54:08.002762 env[1306]: time="2025-10-29T04:54:08.002724468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 29 04:54:08.026692 kubelet[2172]: E1029 04:54:08.026627 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:08.026692 kubelet[2172]: W1029 04:54:08.026659 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:08.026692 kubelet[2172]: E1029 04:54:08.026699 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:08.027114 kubelet[2172]: E1029 04:54:08.027086 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:08.027114 kubelet[2172]: W1029 04:54:08.027107 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:08.027287 kubelet[2172]: E1029 04:54:08.027132 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:08.027479 kubelet[2172]: E1029 04:54:08.027456 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:08.027479 kubelet[2172]: W1029 04:54:08.027475 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:08.027608 kubelet[2172]: E1029 04:54:08.027542 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:08.027903 kubelet[2172]: E1029 04:54:08.027876 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:08.027903 kubelet[2172]: W1029 04:54:08.027897 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:08.028045 kubelet[2172]: E1029 04:54:08.027921 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:08.028216 kubelet[2172]: E1029 04:54:08.028176 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:08.028216 kubelet[2172]: W1029 04:54:08.028195 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:08.028357 kubelet[2172]: E1029 04:54:08.028316 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:08.028558 kubelet[2172]: E1029 04:54:08.028536 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:08.028558 kubelet[2172]: W1029 04:54:08.028556 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:08.028709 kubelet[2172]: E1029 04:54:08.028676 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:08.028890 kubelet[2172]: E1029 04:54:08.028864 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:08.028890 kubelet[2172]: W1029 04:54:08.028883 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:08.029050 kubelet[2172]: E1029 04:54:08.029015 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:08.029218 kubelet[2172]: E1029 04:54:08.029183 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:08.029218 kubelet[2172]: W1029 04:54:08.029202 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:08.030623 kubelet[2172]: E1029 04:54:08.030353 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:08.030780 kubelet[2172]: E1029 04:54:08.030721 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:08.030780 kubelet[2172]: W1029 04:54:08.030736 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:08.031198 kubelet[2172]: E1029 04:54:08.030951 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:08.031785 kubelet[2172]: E1029 04:54:08.031760 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:08.031785 kubelet[2172]: W1029 04:54:08.031781 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:08.032179 kubelet[2172]: E1029 04:54:08.031935 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:08.032330 kubelet[2172]: E1029 04:54:08.032295 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:08.032330 kubelet[2172]: W1029 04:54:08.032316 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:08.032649 kubelet[2172]: E1029 04:54:08.032494 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:08.032726 kubelet[2172]: E1029 04:54:08.032695 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:08.032726 kubelet[2172]: W1029 04:54:08.032710 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:08.032877 kubelet[2172]: E1029 04:54:08.032853 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:08.033144 kubelet[2172]: E1029 04:54:08.033120 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:08.033144 kubelet[2172]: W1029 04:54:08.033138 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:08.033286 kubelet[2172]: E1029 04:54:08.033257 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:08.033580 kubelet[2172]: E1029 04:54:08.033560 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:08.033580 kubelet[2172]: W1029 04:54:08.033578 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:08.033840 kubelet[2172]: E1029 04:54:08.033694 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:08.033933 kubelet[2172]: E1029 04:54:08.033866 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:08.033933 kubelet[2172]: W1029 04:54:08.033879 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:08.034222 kubelet[2172]: E1029 04:54:08.034074 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:08.034314 kubelet[2172]: E1029 04:54:08.034248 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:08.034314 kubelet[2172]: W1029 04:54:08.034262 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:08.034447 kubelet[2172]: E1029 04:54:08.034415 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:08.034728 kubelet[2172]: E1029 04:54:08.034685 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:08.034803 kubelet[2172]: W1029 04:54:08.034728 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:08.034897 kubelet[2172]: E1029 04:54:08.034862 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:08.035135 kubelet[2172]: E1029 04:54:08.035115 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:08.035135 kubelet[2172]: W1029 04:54:08.035133 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:08.035272 kubelet[2172]: E1029 04:54:08.035245 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:08.035535 kubelet[2172]: E1029 04:54:08.035515 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:08.035535 kubelet[2172]: W1029 04:54:08.035533 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:08.035784 kubelet[2172]: E1029 04:54:08.035656 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:08.035877 kubelet[2172]: E1029 04:54:08.035805 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:08.035877 kubelet[2172]: W1029 04:54:08.035819 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:08.036160 kubelet[2172]: E1029 04:54:08.036136 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:08.036409 kubelet[2172]: E1029 04:54:08.036305 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:08.036409 kubelet[2172]: W1029 04:54:08.036326 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:08.036731 kubelet[2172]: E1029 04:54:08.036566 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:08.036839 kubelet[2172]: E1029 04:54:08.036755 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:08.036839 kubelet[2172]: W1029 04:54:08.036769 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:08.036948 kubelet[2172]: E1029 04:54:08.036909 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:08.038792 kubelet[2172]: E1029 04:54:08.038463 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:08.038792 kubelet[2172]: W1029 04:54:08.038485 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:08.038792 kubelet[2172]: E1029 04:54:08.038761 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:08.038992 kubelet[2172]: E1029 04:54:08.038767 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:08.038992 kubelet[2172]: W1029 04:54:08.038813 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:08.039277 kubelet[2172]: E1029 04:54:08.039188 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:08.039277 kubelet[2172]: E1029 04:54:08.039250 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:08.039277 kubelet[2172]: W1029 04:54:08.039265 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:08.039791 kubelet[2172]: E1029 04:54:08.039281 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:08.049715 kubelet[2172]: E1029 04:54:08.049631 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:08.049715 kubelet[2172]: W1029 04:54:08.049651 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:08.049715 kubelet[2172]: E1029 04:54:08.049670 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:08.068427 env[1306]: time="2025-10-29T04:54:08.068302052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4znh4,Uid:70448abd-55db-419e-b8b2-97c21f91c915,Namespace:calico-system,Attempt:0,} returns sandbox id \"8528ad5c6863c70de6f51458dce99c6652ccf0708b0d4f89cf65ce6624f2b48b\"" Oct 29 04:54:08.416000 audit[2720]: NETFILTER_CFG table=filter:101 family=2 entries=22 op=nft_register_rule pid=2720 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:08.416000 audit[2720]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffce652d170 a2=0 a3=7ffce652d15c items=0 ppid=2278 pid=2720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:08.416000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:08.421000 audit[2720]: NETFILTER_CFG table=nat:102 family=2 entries=12 op=nft_register_rule pid=2720 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:08.421000 audit[2720]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 
a1=7ffce652d170 a2=0 a3=0 items=0 ppid=2278 pid=2720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:08.421000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:09.491081 kubelet[2172]: E1029 04:54:09.490484 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tptz2" podUID="de4b152a-29bb-4b0c-a12c-2eda92dd0564" Oct 29 04:54:09.622058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2031338156.mount: Deactivated successfully. Oct 29 04:54:11.485707 kubelet[2172]: E1029 04:54:11.485638 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tptz2" podUID="de4b152a-29bb-4b0c-a12c-2eda92dd0564" Oct 29 04:54:11.601741 env[1306]: time="2025-10-29T04:54:11.601675193Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:54:11.604924 env[1306]: time="2025-10-29T04:54:11.604882032Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:54:11.606874 env[1306]: time="2025-10-29T04:54:11.606837584Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:54:11.608675 env[1306]: time="2025-10-29T04:54:11.608635540Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:54:11.609967 env[1306]: time="2025-10-29T04:54:11.609912556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 29 04:54:11.612335 env[1306]: time="2025-10-29T04:54:11.612165562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 29 04:54:11.646032 env[1306]: time="2025-10-29T04:54:11.645879407Z" level=info msg="CreateContainer within sandbox \"196c0b9976321ebf91533105791e81f06246c6e9cba5629851cb5b03f09232f6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 29 04:54:11.756139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1053946887.mount: Deactivated successfully. 
Oct 29 04:54:11.763391 env[1306]: time="2025-10-29T04:54:11.763304822Z" level=info msg="CreateContainer within sandbox \"196c0b9976321ebf91533105791e81f06246c6e9cba5629851cb5b03f09232f6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2f9f868ff68f2dcc4716bbb8abffc9a9aa3b38b6c8d518ac2e2d5ca3da821a07\"" Oct 29 04:54:11.765984 env[1306]: time="2025-10-29T04:54:11.765618241Z" level=info msg="StartContainer for \"2f9f868ff68f2dcc4716bbb8abffc9a9aa3b38b6c8d518ac2e2d5ca3da821a07\"" Oct 29 04:54:11.904424 env[1306]: time="2025-10-29T04:54:11.904343524Z" level=info msg="StartContainer for \"2f9f868ff68f2dcc4716bbb8abffc9a9aa3b38b6c8d518ac2e2d5ca3da821a07\" returns successfully" Oct 29 04:54:12.678013 kubelet[2172]: I1029 04:54:12.677918 2172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-56d86bb6f8-dk2lt" podStartSLOduration=2.067643466 podStartE2EDuration="5.677872392s" podCreationTimestamp="2025-10-29 04:54:07 +0000 UTC" firstStartedPulling="2025-10-29 04:54:08.001436664 +0000 UTC m=+22.911472647" lastFinishedPulling="2025-10-29 04:54:11.61166559 +0000 UTC m=+26.521701573" observedRunningTime="2025-10-29 04:54:12.676198709 +0000 UTC m=+27.586234709" watchObservedRunningTime="2025-10-29 04:54:12.677872392 +0000 UTC m=+27.587908387" Oct 29 04:54:12.741401 kubelet[2172]: E1029 04:54:12.741304 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:12.741401 kubelet[2172]: W1029 04:54:12.741359 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:12.741728 kubelet[2172]: E1029 04:54:12.741448 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:12.741830 kubelet[2172]: E1029 04:54:12.741805 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:12.741830 kubelet[2172]: W1029 04:54:12.741829 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:12.741986 kubelet[2172]: E1029 04:54:12.741847 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:12.742353 kubelet[2172]: E1029 04:54:12.742325 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:12.742353 kubelet[2172]: W1029 04:54:12.742347 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:12.742514 kubelet[2172]: E1029 04:54:12.742365 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:12.742731 kubelet[2172]: E1029 04:54:12.742703 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:12.742731 kubelet[2172]: W1029 04:54:12.742724 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:12.742870 kubelet[2172]: E1029 04:54:12.742742 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:12.743058 kubelet[2172]: E1029 04:54:12.743031 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:12.743058 kubelet[2172]: W1029 04:54:12.743053 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:12.743197 kubelet[2172]: E1029 04:54:12.743070 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:12.743338 kubelet[2172]: E1029 04:54:12.743310 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:12.743338 kubelet[2172]: W1029 04:54:12.743332 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:12.743338 kubelet[2172]: E1029 04:54:12.743348 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:12.743629 kubelet[2172]: E1029 04:54:12.743600 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:12.743629 kubelet[2172]: W1029 04:54:12.743622 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:12.743790 kubelet[2172]: E1029 04:54:12.743638 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:12.744785 kubelet[2172]: E1029 04:54:12.744440 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:12.744785 kubelet[2172]: W1029 04:54:12.744465 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:12.744785 kubelet[2172]: E1029 04:54:12.744483 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:12.744785 kubelet[2172]: E1029 04:54:12.744760 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:12.744785 kubelet[2172]: W1029 04:54:12.744774 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:12.744785 kubelet[2172]: E1029 04:54:12.744789 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:12.745559 kubelet[2172]: E1029 04:54:12.745104 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:12.745559 kubelet[2172]: W1029 04:54:12.745120 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:12.745559 kubelet[2172]: E1029 04:54:12.745135 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:12.746223 kubelet[2172]: E1029 04:54:12.746196 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:12.746223 kubelet[2172]: W1029 04:54:12.746219 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:12.746223 kubelet[2172]: E1029 04:54:12.746236 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:12.746583 kubelet[2172]: E1029 04:54:12.746518 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:12.746583 kubelet[2172]: W1029 04:54:12.746532 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:12.746583 kubelet[2172]: E1029 04:54:12.746547 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:12.746870 kubelet[2172]: E1029 04:54:12.746805 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:12.746870 kubelet[2172]: W1029 04:54:12.746828 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:12.746870 kubelet[2172]: E1029 04:54:12.746844 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:12.747729 kubelet[2172]: E1029 04:54:12.747696 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:12.747729 kubelet[2172]: W1029 04:54:12.747719 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:12.748118 kubelet[2172]: E1029 04:54:12.747736 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:12.748118 kubelet[2172]: E1029 04:54:12.747979 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:12.748118 kubelet[2172]: W1029 04:54:12.747994 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:12.748118 kubelet[2172]: E1029 04:54:12.748011 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:12.764979 kubelet[2172]: E1029 04:54:12.764911 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:12.764979 kubelet[2172]: W1029 04:54:12.764970 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:12.765260 kubelet[2172]: E1029 04:54:12.765007 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:12.765564 kubelet[2172]: E1029 04:54:12.765537 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:12.765564 kubelet[2172]: W1029 04:54:12.765558 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:12.765724 kubelet[2172]: E1029 04:54:12.765584 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:12.765979 kubelet[2172]: E1029 04:54:12.765947 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:12.765979 kubelet[2172]: W1029 04:54:12.765971 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:12.766117 kubelet[2172]: E1029 04:54:12.766006 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:12.766347 kubelet[2172]: E1029 04:54:12.766323 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:12.766347 kubelet[2172]: W1029 04:54:12.766344 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:12.766552 kubelet[2172]: E1029 04:54:12.766395 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:12.766749 kubelet[2172]: E1029 04:54:12.766725 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:12.766749 kubelet[2172]: W1029 04:54:12.766745 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:12.766893 kubelet[2172]: E1029 04:54:12.766868 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:12.767066 kubelet[2172]: E1029 04:54:12.767040 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:12.767066 kubelet[2172]: W1029 04:54:12.767061 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:12.767203 kubelet[2172]: E1029 04:54:12.767182 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:12.767404 kubelet[2172]: E1029 04:54:12.767353 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:12.767519 kubelet[2172]: W1029 04:54:12.767394 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:12.767933 kubelet[2172]: E1029 04:54:12.767542 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:12.767933 kubelet[2172]: E1029 04:54:12.767743 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:12.767933 kubelet[2172]: W1029 04:54:12.767757 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:12.767933 kubelet[2172]: E1029 04:54:12.767779 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:12.768147 kubelet[2172]: E1029 04:54:12.768082 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:12.768147 kubelet[2172]: W1029 04:54:12.768098 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:12.768147 kubelet[2172]: E1029 04:54:12.768119 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:12.768778 kubelet[2172]: E1029 04:54:12.768752 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:12.768778 kubelet[2172]: W1029 04:54:12.768775 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:12.769106 kubelet[2172]: E1029 04:54:12.768975 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:12.769106 kubelet[2172]: E1029 04:54:12.769018 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:12.769106 kubelet[2172]: W1029 04:54:12.769033 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:12.769106 kubelet[2172]: E1029 04:54:12.769058 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:12.769354 kubelet[2172]: E1029 04:54:12.769259 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:12.769354 kubelet[2172]: W1029 04:54:12.769274 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:12.769354 kubelet[2172]: E1029 04:54:12.769296 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:12.769594 kubelet[2172]: E1029 04:54:12.769568 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:12.769594 kubelet[2172]: W1029 04:54:12.769590 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:12.769771 kubelet[2172]: E1029 04:54:12.769614 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:12.769952 kubelet[2172]: E1029 04:54:12.769929 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:12.769952 kubelet[2172]: W1029 04:54:12.769949 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:12.770096 kubelet[2172]: E1029 04:54:12.769973 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:12.770550 kubelet[2172]: E1029 04:54:12.770524 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:12.770550 kubelet[2172]: W1029 04:54:12.770546 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:12.770798 kubelet[2172]: E1029 04:54:12.770739 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:12.770894 kubelet[2172]: E1029 04:54:12.770852 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:12.770894 kubelet[2172]: W1029 04:54:12.770867 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:12.770894 kubelet[2172]: E1029 04:54:12.770882 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:12.771165 kubelet[2172]: E1029 04:54:12.771140 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:12.771165 kubelet[2172]: W1029 04:54:12.771161 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:12.771322 kubelet[2172]: E1029 04:54:12.771177 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 29 04:54:12.772078 kubelet[2172]: E1029 04:54:12.772053 2172 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 29 04:54:12.772078 kubelet[2172]: W1029 04:54:12.772073 2172 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 29 04:54:12.772233 kubelet[2172]: E1029 04:54:12.772102 2172 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 29 04:54:13.360232 env[1306]: time="2025-10-29T04:54:13.360170013Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:54:13.361826 env[1306]: time="2025-10-29T04:54:13.361791479Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:54:13.363650 env[1306]: time="2025-10-29T04:54:13.363611077Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:54:13.367557 env[1306]: time="2025-10-29T04:54:13.367518092Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:54:13.371836 env[1306]: time="2025-10-29T04:54:13.371794676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 29 04:54:13.387266 env[1306]: time="2025-10-29T04:54:13.387212157Z" level=info msg="CreateContainer within sandbox \"8528ad5c6863c70de6f51458dce99c6652ccf0708b0d4f89cf65ce6624f2b48b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 29 04:54:13.404703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount907217151.mount: Deactivated successfully. 
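The repeated kubelet errors above all share one root cause: the FlexVolume probe invokes a driver binary that does not exist yet, the `init` call therefore produces no output, and decoding the empty string as JSON fails with "unexpected end of JSON input". A minimal sketch of that failure mode, assuming nothing beyond the driver path shown in the log (`call_driver` is an illustrative helper, not kubelet code):

```python
import json
import subprocess

def call_driver(path, args):
    # Run a FlexVolume driver and JSON-decode its stdout, roughly as the
    # kubelet's driver-call does. A missing binary produces no output,
    # and json.loads("") then fails -- the "unexpected end of JSON
    # input" repeated throughout the log above.
    try:
        out = subprocess.run([path, *args], capture_output=True, text=True).stdout
    except (FileNotFoundError, OSError):
        out = ""  # matches the log: executable file not found, output: ""
    try:
        return json.loads(out)
    except json.JSONDecodeError:
        return None  # kubelet logs the unmarshal error and skips the plugin

# The path probed in the log; absent until the flexvol-driver container runs.
call_driver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", ["init"])
```

The errors stop being logged once the `flexvol-driver` container (started just below) copies the `uds` binary into place.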
Oct 29 04:54:13.408265 env[1306]: time="2025-10-29T04:54:13.408174250Z" level=info msg="CreateContainer within sandbox \"8528ad5c6863c70de6f51458dce99c6652ccf0708b0d4f89cf65ce6624f2b48b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"04f3875a0120238cdd55292ba3a0fb15b26bb1b7e439e342496f5c1ab3cef2f0\"" Oct 29 04:54:13.409292 env[1306]: time="2025-10-29T04:54:13.409199461Z" level=info msg="StartContainer for \"04f3875a0120238cdd55292ba3a0fb15b26bb1b7e439e342496f5c1ab3cef2f0\"" Oct 29 04:54:13.485622 kubelet[2172]: E1029 04:54:13.485475 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tptz2" podUID="de4b152a-29bb-4b0c-a12c-2eda92dd0564" Oct 29 04:54:13.523598 env[1306]: time="2025-10-29T04:54:13.523531191Z" level=info msg="StartContainer for \"04f3875a0120238cdd55292ba3a0fb15b26bb1b7e439e342496f5c1ab3cef2f0\" returns successfully" Oct 29 04:54:13.587398 env[1306]: time="2025-10-29T04:54:13.587292806Z" level=info msg="shim disconnected" id=04f3875a0120238cdd55292ba3a0fb15b26bb1b7e439e342496f5c1ab3cef2f0 Oct 29 04:54:13.587996 env[1306]: time="2025-10-29T04:54:13.587479656Z" level=warning msg="cleaning up after shim disconnected" id=04f3875a0120238cdd55292ba3a0fb15b26bb1b7e439e342496f5c1ab3cef2f0 namespace=k8s.io Oct 29 04:54:13.587996 env[1306]: time="2025-10-29T04:54:13.587504715Z" level=info msg="cleaning up dead shim" Oct 29 04:54:13.602013 env[1306]: time="2025-10-29T04:54:13.601937425Z" level=warning msg="cleanup warnings time=\"2025-10-29T04:54:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2847 runtime=io.containerd.runc.v2\n" Oct 29 04:54:13.631343 systemd[1]: run-containerd-runc-k8s.io-04f3875a0120238cdd55292ba3a0fb15b26bb1b7e439e342496f5c1ab3cef2f0-runc.70Hweb.mount: Deactivated successfully. 
Oct 29 04:54:13.631601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04f3875a0120238cdd55292ba3a0fb15b26bb1b7e439e342496f5c1ab3cef2f0-rootfs.mount: Deactivated successfully. Oct 29 04:54:13.664326 kubelet[2172]: I1029 04:54:13.664283 2172 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 29 04:54:13.668449 env[1306]: time="2025-10-29T04:54:13.668298277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 29 04:54:15.487639 kubelet[2172]: E1029 04:54:15.484740 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tptz2" podUID="de4b152a-29bb-4b0c-a12c-2eda92dd0564" Oct 29 04:54:17.484997 kubelet[2172]: E1029 04:54:17.484930 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tptz2" podUID="de4b152a-29bb-4b0c-a12c-2eda92dd0564" Oct 29 04:54:19.485017 kubelet[2172]: E1029 04:54:19.483879 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tptz2" podUID="de4b152a-29bb-4b0c-a12c-2eda92dd0564" Oct 29 04:54:19.513079 env[1306]: time="2025-10-29T04:54:19.513018166Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:54:19.515258 env[1306]: time="2025-10-29T04:54:19.515216947Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:54:19.518766 env[1306]: time="2025-10-29T04:54:19.518533107Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:54:19.521552 env[1306]: time="2025-10-29T04:54:19.521436111Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:54:19.522585 env[1306]: time="2025-10-29T04:54:19.522532785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 29 04:54:19.527329 env[1306]: time="2025-10-29T04:54:19.527286842Z" level=info msg="CreateContainer within sandbox \"8528ad5c6863c70de6f51458dce99c6652ccf0708b0d4f89cf65ce6624f2b48b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 29 04:54:19.544646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2282365306.mount: Deactivated successfully. 
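The three ImageCreate/ImageUpdate events above register a single pulled image under three names: the tag, the config-blob ID that `PullImage` returns, and the repo digest. A minimal sketch of that many-names-one-image index, using the exact names from the log (the index itself is illustrative, not containerd's image store):

```python
# One backing image, addressable by tag, by image ID, and by digest --
# mirroring the ImageCreate/ImageUpdate events in the log above.
IMAGE_ID = "sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd"

image_index = {}

def record_image(name, image_id):
    # Register one more name for an already-stored image.
    image_index[name] = image_id

for name in (
    "ghcr.io/flatcar/calico/cni:v3.30.4",
    IMAGE_ID,
    "ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627",
):
    record_image(name, IMAGE_ID)
```

Resolving any of the three names yields the same image ID, which is why the `PullImage` call for the tag can return the pinned `sha256:24e1e…` reference.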
Oct 29 04:54:19.550389 env[1306]: time="2025-10-29T04:54:19.550247177Z" level=info msg="CreateContainer within sandbox \"8528ad5c6863c70de6f51458dce99c6652ccf0708b0d4f89cf65ce6624f2b48b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"37aa8fd63c64178b9be0b8c466b790d73849f19008f8826fcbdd015f38d085fd\"" Oct 29 04:54:19.551781 env[1306]: time="2025-10-29T04:54:19.551741897Z" level=info msg="StartContainer for \"37aa8fd63c64178b9be0b8c466b790d73849f19008f8826fcbdd015f38d085fd\"" Oct 29 04:54:19.658465 env[1306]: time="2025-10-29T04:54:19.658411392Z" level=info msg="StartContainer for \"37aa8fd63c64178b9be0b8c466b790d73849f19008f8826fcbdd015f38d085fd\" returns successfully" Oct 29 04:54:20.539694 systemd[1]: run-containerd-runc-k8s.io-37aa8fd63c64178b9be0b8c466b790d73849f19008f8826fcbdd015f38d085fd-runc.8nsgku.mount: Deactivated successfully. Oct 29 04:54:20.776871 env[1306]: time="2025-10-29T04:54:20.776754445Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 29 04:54:20.807016 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37aa8fd63c64178b9be0b8c466b790d73849f19008f8826fcbdd015f38d085fd-rootfs.mount: Deactivated successfully. 
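The "failed to reload cni configuration" error above is expected ordering rather than a fault: `install-cni` has just written `calico-kubeconfig`, but no `*.conf`/`*.conflist` network config exists yet in `/etc/cni/net.d`, so the runtime stays NetworkReady=false. A sketch of that readiness condition, assuming only the conventional config directory (the helper is illustrative; the real loader lives in containerd's CNI library):

```python
from pathlib import Path

def cni_config_present(net_d="/etc/cni/net.d"):
    # The runtime considers CNI configured once at least one
    # *.conf/*.conflist file exists in the config directory; a
    # kubeconfig file alone (the WRITE event in the log above) does
    # not count as a network config.
    d = Path(net_d)
    if not d.is_dir():
        return False
    return any(p.suffix in (".conf", ".conflist") for p in d.iterdir())
```

Once a file such as `10-calico.conflist` lands in the directory, the reload succeeds and the NetworkPluginNotReady errors stop.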
Oct 29 04:54:20.811861 env[1306]: time="2025-10-29T04:54:20.811750099Z" level=info msg="shim disconnected" id=37aa8fd63c64178b9be0b8c466b790d73849f19008f8826fcbdd015f38d085fd Oct 29 04:54:20.811990 env[1306]: time="2025-10-29T04:54:20.811864191Z" level=warning msg="cleaning up after shim disconnected" id=37aa8fd63c64178b9be0b8c466b790d73849f19008f8826fcbdd015f38d085fd namespace=k8s.io Oct 29 04:54:20.811990 env[1306]: time="2025-10-29T04:54:20.811886582Z" level=info msg="cleaning up dead shim" Oct 29 04:54:20.824139 env[1306]: time="2025-10-29T04:54:20.824077417Z" level=warning msg="cleanup warnings time=\"2025-10-29T04:54:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2920 runtime=io.containerd.runc.v2\n" Oct 29 04:54:20.874218 kubelet[2172]: I1029 04:54:20.874166 2172 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 29 04:54:21.021891 kubelet[2172]: I1029 04:54:21.021830 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/817b634e-d1b7-42d4-a9ce-40f3f46513d8-whisker-ca-bundle\") pod \"whisker-65797c84c4-zrdnx\" (UID: \"817b634e-d1b7-42d4-a9ce-40f3f46513d8\") " pod="calico-system/whisker-65797c84c4-zrdnx" Oct 29 04:54:21.022288 kubelet[2172]: I1029 04:54:21.022257 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm8fq\" (UniqueName: \"kubernetes.io/projected/b4da2a97-feea-487c-8384-a94163380e6f-kube-api-access-nm8fq\") pod \"calico-apiserver-678d6449b5-m8bcr\" (UID: \"b4da2a97-feea-487c-8384-a94163380e6f\") " pod="calico-apiserver/calico-apiserver-678d6449b5-m8bcr" Oct 29 04:54:21.022537 kubelet[2172]: I1029 04:54:21.022506 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4d534185-a9e7-4c27-807c-917c6d4b755f-goldmane-key-pair\") 
pod \"goldmane-666569f655-4pd8r\" (UID: \"4d534185-a9e7-4c27-807c-917c6d4b755f\") " pod="calico-system/goldmane-666569f655-4pd8r" Oct 29 04:54:21.022696 kubelet[2172]: I1029 04:54:21.022667 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c2b379e-441a-4610-b0bd-30a6fa391f82-tigera-ca-bundle\") pod \"calico-kube-controllers-865b7496cf-28bh8\" (UID: \"9c2b379e-441a-4610-b0bd-30a6fa391f82\") " pod="calico-system/calico-kube-controllers-865b7496cf-28bh8" Oct 29 04:54:21.026005 kubelet[2172]: I1029 04:54:21.025959 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r268s\" (UniqueName: \"kubernetes.io/projected/4d534185-a9e7-4c27-807c-917c6d4b755f-kube-api-access-r268s\") pod \"goldmane-666569f655-4pd8r\" (UID: \"4d534185-a9e7-4c27-807c-917c6d4b755f\") " pod="calico-system/goldmane-666569f655-4pd8r" Oct 29 04:54:21.026186 kubelet[2172]: I1029 04:54:21.026019 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b4da2a97-feea-487c-8384-a94163380e6f-calico-apiserver-certs\") pod \"calico-apiserver-678d6449b5-m8bcr\" (UID: \"b4da2a97-feea-487c-8384-a94163380e6f\") " pod="calico-apiserver/calico-apiserver-678d6449b5-m8bcr" Oct 29 04:54:21.026186 kubelet[2172]: I1029 04:54:21.026052 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcr87\" (UniqueName: \"kubernetes.io/projected/c95faceb-3919-455c-bd6c-4a68d6375a6d-kube-api-access-kcr87\") pod \"calico-apiserver-678d6449b5-8q748\" (UID: \"c95faceb-3919-455c-bd6c-4a68d6375a6d\") " pod="calico-apiserver/calico-apiserver-678d6449b5-8q748" Oct 29 04:54:21.026186 kubelet[2172]: I1029 04:54:21.026089 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/817b634e-d1b7-42d4-a9ce-40f3f46513d8-whisker-backend-key-pair\") pod \"whisker-65797c84c4-zrdnx\" (UID: \"817b634e-d1b7-42d4-a9ce-40f3f46513d8\") " pod="calico-system/whisker-65797c84c4-zrdnx" Oct 29 04:54:21.026186 kubelet[2172]: I1029 04:54:21.026132 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b87dfa06-fb00-43d6-9e83-3b9e31aa23c5-config-volume\") pod \"coredns-668d6bf9bc-lslb8\" (UID: \"b87dfa06-fb00-43d6-9e83-3b9e31aa23c5\") " pod="kube-system/coredns-668d6bf9bc-lslb8" Oct 29 04:54:21.026186 kubelet[2172]: I1029 04:54:21.026166 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d534185-a9e7-4c27-807c-917c6d4b755f-goldmane-ca-bundle\") pod \"goldmane-666569f655-4pd8r\" (UID: \"4d534185-a9e7-4c27-807c-917c6d4b755f\") " pod="calico-system/goldmane-666569f655-4pd8r" Oct 29 04:54:21.026887 kubelet[2172]: I1029 04:54:21.026195 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wqsh\" (UniqueName: \"kubernetes.io/projected/817b634e-d1b7-42d4-a9ce-40f3f46513d8-kube-api-access-2wqsh\") pod \"whisker-65797c84c4-zrdnx\" (UID: \"817b634e-d1b7-42d4-a9ce-40f3f46513d8\") " pod="calico-system/whisker-65797c84c4-zrdnx" Oct 29 04:54:21.026887 kubelet[2172]: I1029 04:54:21.026222 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14f0fbf4-8cf4-45f3-bb19-eb68dd03b78e-config-volume\") pod \"coredns-668d6bf9bc-s5c5l\" (UID: \"14f0fbf4-8cf4-45f3-bb19-eb68dd03b78e\") " pod="kube-system/coredns-668d6bf9bc-s5c5l" Oct 29 04:54:21.026887 kubelet[2172]: I1029 04:54:21.026264 2172 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d534185-a9e7-4c27-807c-917c6d4b755f-config\") pod \"goldmane-666569f655-4pd8r\" (UID: \"4d534185-a9e7-4c27-807c-917c6d4b755f\") " pod="calico-system/goldmane-666569f655-4pd8r" Oct 29 04:54:21.026887 kubelet[2172]: I1029 04:54:21.026292 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c95faceb-3919-455c-bd6c-4a68d6375a6d-calico-apiserver-certs\") pod \"calico-apiserver-678d6449b5-8q748\" (UID: \"c95faceb-3919-455c-bd6c-4a68d6375a6d\") " pod="calico-apiserver/calico-apiserver-678d6449b5-8q748" Oct 29 04:54:21.026887 kubelet[2172]: I1029 04:54:21.026326 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlhdt\" (UniqueName: \"kubernetes.io/projected/b87dfa06-fb00-43d6-9e83-3b9e31aa23c5-kube-api-access-jlhdt\") pod \"coredns-668d6bf9bc-lslb8\" (UID: \"b87dfa06-fb00-43d6-9e83-3b9e31aa23c5\") " pod="kube-system/coredns-668d6bf9bc-lslb8" Oct 29 04:54:21.027523 kubelet[2172]: I1029 04:54:21.026368 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qkdb\" (UniqueName: \"kubernetes.io/projected/9c2b379e-441a-4610-b0bd-30a6fa391f82-kube-api-access-2qkdb\") pod \"calico-kube-controllers-865b7496cf-28bh8\" (UID: \"9c2b379e-441a-4610-b0bd-30a6fa391f82\") " pod="calico-system/calico-kube-controllers-865b7496cf-28bh8" Oct 29 04:54:21.027523 kubelet[2172]: I1029 04:54:21.026412 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc8zq\" (UniqueName: \"kubernetes.io/projected/14f0fbf4-8cf4-45f3-bb19-eb68dd03b78e-kube-api-access-tc8zq\") pod \"coredns-668d6bf9bc-s5c5l\" (UID: \"14f0fbf4-8cf4-45f3-bb19-eb68dd03b78e\") " 
pod="kube-system/coredns-668d6bf9bc-s5c5l" Oct 29 04:54:21.272825 env[1306]: time="2025-10-29T04:54:21.272761853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-678d6449b5-8q748,Uid:c95faceb-3919-455c-bd6c-4a68d6375a6d,Namespace:calico-apiserver,Attempt:0,}" Oct 29 04:54:21.273236 env[1306]: time="2025-10-29T04:54:21.272814910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lslb8,Uid:b87dfa06-fb00-43d6-9e83-3b9e31aa23c5,Namespace:kube-system,Attempt:0,}" Oct 29 04:54:21.275589 env[1306]: time="2025-10-29T04:54:21.275537027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-678d6449b5-m8bcr,Uid:b4da2a97-feea-487c-8384-a94163380e6f,Namespace:calico-apiserver,Attempt:0,}" Oct 29 04:54:21.307721 env[1306]: time="2025-10-29T04:54:21.307663255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s5c5l,Uid:14f0fbf4-8cf4-45f3-bb19-eb68dd03b78e,Namespace:kube-system,Attempt:0,}" Oct 29 04:54:21.309173 env[1306]: time="2025-10-29T04:54:21.309007212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65797c84c4-zrdnx,Uid:817b634e-d1b7-42d4-a9ce-40f3f46513d8,Namespace:calico-system,Attempt:0,}" Oct 29 04:54:21.313832 env[1306]: time="2025-10-29T04:54:21.313786248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4pd8r,Uid:4d534185-a9e7-4c27-807c-917c6d4b755f,Namespace:calico-system,Attempt:0,}" Oct 29 04:54:21.314251 env[1306]: time="2025-10-29T04:54:21.314211982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-865b7496cf-28bh8,Uid:9c2b379e-441a-4610-b0bd-30a6fa391f82,Namespace:calico-system,Attempt:0,}" Oct 29 04:54:21.501883 env[1306]: time="2025-10-29T04:54:21.501789274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tptz2,Uid:de4b152a-29bb-4b0c-a12c-2eda92dd0564,Namespace:calico-system,Attempt:0,}" Oct 29 04:54:21.710546 env[1306]: 
time="2025-10-29T04:54:21.707617714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 29 04:54:21.759274 env[1306]: time="2025-10-29T04:54:21.759174207Z" level=error msg="Failed to destroy network for sandbox \"e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:21.763171 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e-shm.mount: Deactivated successfully. Oct 29 04:54:21.764905 env[1306]: time="2025-10-29T04:54:21.764854721Z" level=error msg="encountered an error cleaning up failed sandbox \"e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:21.765018 env[1306]: time="2025-10-29T04:54:21.764943083Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-678d6449b5-m8bcr,Uid:b4da2a97-feea-487c-8384-a94163380e6f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:21.768086 kubelet[2172]: E1029 04:54:21.765399 2172 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:21.770657 kubelet[2172]: E1029 04:54:21.770014 2172 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-678d6449b5-m8bcr" Oct 29 04:54:21.770657 kubelet[2172]: E1029 04:54:21.770191 2172 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-678d6449b5-m8bcr" Oct 29 04:54:21.773096 kubelet[2172]: E1029 04:54:21.770295 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-678d6449b5-m8bcr_calico-apiserver(b4da2a97-feea-487c-8384-a94163380e6f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-678d6449b5-m8bcr_calico-apiserver(b4da2a97-feea-487c-8384-a94163380e6f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-678d6449b5-m8bcr" podUID="b4da2a97-feea-487c-8384-a94163380e6f" Oct 29 04:54:21.836087 env[1306]: 
time="2025-10-29T04:54:21.835997602Z" level=error msg="Failed to destroy network for sandbox \"a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:21.840146 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b-shm.mount: Deactivated successfully. Oct 29 04:54:21.841728 env[1306]: time="2025-10-29T04:54:21.841675647Z" level=error msg="encountered an error cleaning up failed sandbox \"a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:21.842903 env[1306]: time="2025-10-29T04:54:21.841903892Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4pd8r,Uid:4d534185-a9e7-4c27-807c-917c6d4b755f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:21.843431 kubelet[2172]: E1029 04:54:21.842234 2172 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:21.843431 kubelet[2172]: E1029 
04:54:21.842311 2172 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-4pd8r" Oct 29 04:54:21.843431 kubelet[2172]: E1029 04:54:21.842343 2172 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-4pd8r" Oct 29 04:54:21.844158 kubelet[2172]: E1029 04:54:21.842473 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-4pd8r_calico-system(4d534185-a9e7-4c27-807c-917c6d4b755f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-4pd8r_calico-system(4d534185-a9e7-4c27-807c-917c6d4b755f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-4pd8r" podUID="4d534185-a9e7-4c27-807c-917c6d4b755f" Oct 29 04:54:21.871247 env[1306]: time="2025-10-29T04:54:21.871157181Z" level=error msg="Failed to destroy network for sandbox \"7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:21.871834 env[1306]: time="2025-10-29T04:54:21.871785473Z" level=error msg="encountered an error cleaning up failed sandbox \"7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:21.871929 env[1306]: time="2025-10-29T04:54:21.871856441Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s5c5l,Uid:14f0fbf4-8cf4-45f3-bb19-eb68dd03b78e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:21.874341 kubelet[2172]: E1029 04:54:21.872208 2172 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:21.874341 kubelet[2172]: E1029 04:54:21.872293 2172 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-s5c5l" Oct 29 04:54:21.874341 kubelet[2172]: E1029 04:54:21.872336 2172 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-s5c5l" Oct 29 04:54:21.876931 kubelet[2172]: E1029 04:54:21.872432 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-s5c5l_kube-system(14f0fbf4-8cf4-45f3-bb19-eb68dd03b78e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-s5c5l_kube-system(14f0fbf4-8cf4-45f3-bb19-eb68dd03b78e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-s5c5l" podUID="14f0fbf4-8cf4-45f3-bb19-eb68dd03b78e" Oct 29 04:54:21.878551 kubelet[2172]: I1029 04:54:21.877313 2172 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 29 04:54:21.945968 env[1306]: time="2025-10-29T04:54:21.945787902Z" level=error msg="Failed to destroy network for sandbox \"0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:21.950084 env[1306]: time="2025-10-29T04:54:21.947671498Z" level=error msg="encountered an error cleaning up failed sandbox 
\"0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:21.950084 env[1306]: time="2025-10-29T04:54:21.947772983Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65797c84c4-zrdnx,Uid:817b634e-d1b7-42d4-a9ce-40f3f46513d8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:21.953340 kubelet[2172]: E1029 04:54:21.949607 2172 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:21.953340 kubelet[2172]: E1029 04:54:21.949686 2172 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65797c84c4-zrdnx" Oct 29 04:54:21.953340 kubelet[2172]: E1029 04:54:21.949718 2172 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65797c84c4-zrdnx" Oct 29 04:54:21.953801 kubelet[2172]: E1029 04:54:21.949781 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-65797c84c4-zrdnx_calico-system(817b634e-d1b7-42d4-a9ce-40f3f46513d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-65797c84c4-zrdnx_calico-system(817b634e-d1b7-42d4-a9ce-40f3f46513d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65797c84c4-zrdnx" podUID="817b634e-d1b7-42d4-a9ce-40f3f46513d8" Oct 29 04:54:21.954129 env[1306]: time="2025-10-29T04:54:21.954076817Z" level=error msg="Failed to destroy network for sandbox \"eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:21.954728 env[1306]: time="2025-10-29T04:54:21.954682333Z" level=error msg="encountered an error cleaning up failed sandbox \"eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:21.954980 env[1306]: time="2025-10-29T04:54:21.954930801Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lslb8,Uid:b87dfa06-fb00-43d6-9e83-3b9e31aa23c5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:21.955536 kubelet[2172]: E1029 04:54:21.955272 2172 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:21.955536 kubelet[2172]: E1029 04:54:21.955325 2172 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lslb8" Oct 29 04:54:21.955536 kubelet[2172]: E1029 04:54:21.955358 2172 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lslb8" Oct 29 04:54:21.957971 kubelet[2172]: E1029 04:54:21.955440 2172 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lslb8_kube-system(b87dfa06-fb00-43d6-9e83-3b9e31aa23c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lslb8_kube-system(b87dfa06-fb00-43d6-9e83-3b9e31aa23c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lslb8" podUID="b87dfa06-fb00-43d6-9e83-3b9e31aa23c5" Oct 29 04:54:21.958698 env[1306]: time="2025-10-29T04:54:21.958645963Z" level=error msg="Failed to destroy network for sandbox \"739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:21.967623 env[1306]: time="2025-10-29T04:54:21.962095992Z" level=error msg="Failed to destroy network for sandbox \"c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:21.968450 env[1306]: time="2025-10-29T04:54:21.968362451Z" level=error msg="encountered an error cleaning up failed sandbox \"739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:21.968646 env[1306]: time="2025-10-29T04:54:21.968598820Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-tptz2,Uid:de4b152a-29bb-4b0c-a12c-2eda92dd0564,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:21.968796 env[1306]: time="2025-10-29T04:54:21.968744436Z" level=error msg="encountered an error cleaning up failed sandbox \"c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:21.968885 env[1306]: time="2025-10-29T04:54:21.968820401Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-678d6449b5-8q748,Uid:c95faceb-3919-455c-bd6c-4a68d6375a6d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:21.969886 kubelet[2172]: E1029 04:54:21.969208 2172 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:21.969886 kubelet[2172]: E1029 04:54:21.969285 2172 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tptz2" Oct 29 04:54:21.969886 kubelet[2172]: E1029 04:54:21.969334 2172 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tptz2" Oct 29 04:54:21.970114 kubelet[2172]: E1029 04:54:21.969421 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tptz2_calico-system(de4b152a-29bb-4b0c-a12c-2eda92dd0564)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tptz2_calico-system(de4b152a-29bb-4b0c-a12c-2eda92dd0564)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tptz2" podUID="de4b152a-29bb-4b0c-a12c-2eda92dd0564" Oct 29 04:54:21.970114 kubelet[2172]: E1029 04:54:21.969663 2172 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:21.970114 kubelet[2172]: E1029 04:54:21.969707 2172 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-678d6449b5-8q748" Oct 29 04:54:21.970328 kubelet[2172]: E1029 04:54:21.969732 2172 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-678d6449b5-8q748" Oct 29 04:54:21.970328 kubelet[2172]: E1029 04:54:21.969800 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-678d6449b5-8q748_calico-apiserver(c95faceb-3919-455c-bd6c-4a68d6375a6d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-678d6449b5-8q748_calico-apiserver(c95faceb-3919-455c-bd6c-4a68d6375a6d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-678d6449b5-8q748" podUID="c95faceb-3919-455c-bd6c-4a68d6375a6d" Oct 29 04:54:21.986630 env[1306]: time="2025-10-29T04:54:21.986549469Z" 
level=error msg="Failed to destroy network for sandbox \"52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:21.987104 env[1306]: time="2025-10-29T04:54:21.987054172Z" level=error msg="encountered an error cleaning up failed sandbox \"52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:21.988650 env[1306]: time="2025-10-29T04:54:21.987131209Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-865b7496cf-28bh8,Uid:9c2b379e-441a-4610-b0bd-30a6fa391f82,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:21.988781 kubelet[2172]: E1029 04:54:21.987476 2172 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:21.988781 kubelet[2172]: E1029 04:54:21.987541 2172 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-865b7496cf-28bh8" Oct 29 04:54:21.988781 kubelet[2172]: E1029 04:54:21.987572 2172 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-865b7496cf-28bh8" Oct 29 04:54:21.988992 kubelet[2172]: E1029 04:54:21.987629 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-865b7496cf-28bh8_calico-system(9c2b379e-441a-4610-b0bd-30a6fa391f82)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-865b7496cf-28bh8_calico-system(9c2b379e-441a-4610-b0bd-30a6fa391f82)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-865b7496cf-28bh8" podUID="9c2b379e-441a-4610-b0bd-30a6fa391f82" Oct 29 04:54:21.994000 audit[3161]: NETFILTER_CFG table=filter:103 family=2 entries=21 op=nft_register_rule pid=3161 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:22.001391 kernel: kauditd_printk_skb: 8 callbacks suppressed Oct 29 04:54:22.001539 kernel: audit: type=1325 audit(1761713661.994:311): 
table=filter:103 family=2 entries=21 op=nft_register_rule pid=3161 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:21.994000 audit[3161]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe9f4c2320 a2=0 a3=7ffe9f4c230c items=0 ppid=2278 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:21.994000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:22.017728 kernel: audit: type=1300 audit(1761713661.994:311): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe9f4c2320 a2=0 a3=7ffe9f4c230c items=0 ppid=2278 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:22.017858 kernel: audit: type=1327 audit(1761713661.994:311): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:22.018093 kernel: audit: type=1325 audit(1761713662.006:312): table=nat:104 family=2 entries=19 op=nft_register_chain pid=3161 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:22.006000 audit[3161]: NETFILTER_CFG table=nat:104 family=2 entries=19 op=nft_register_chain pid=3161 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:22.006000 audit[3161]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe9f4c2320 a2=0 a3=7ffe9f4c230c items=0 ppid=2278 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:22.029581 kernel: audit: 
type=1300 audit(1761713662.006:312): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffe9f4c2320 a2=0 a3=7ffe9f4c230c items=0 ppid=2278 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:22.006000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:22.033611 kernel: audit: type=1327 audit(1761713662.006:312): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:22.540651 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4-shm.mount: Deactivated successfully. Oct 29 04:54:22.540878 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6-shm.mount: Deactivated successfully. Oct 29 04:54:22.541061 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1-shm.mount: Deactivated successfully. Oct 29 04:54:22.541245 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5-shm.mount: Deactivated successfully. Oct 29 04:54:22.541441 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1-shm.mount: Deactivated successfully. Oct 29 04:54:22.541591 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4-shm.mount: Deactivated successfully. 
Oct 29 04:54:22.696523 kubelet[2172]: I1029 04:54:22.695661 2172 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" Oct 29 04:54:22.703462 kubelet[2172]: I1029 04:54:22.703432 2172 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" Oct 29 04:54:22.706084 env[1306]: time="2025-10-29T04:54:22.705980347Z" level=info msg="StopPodSandbox for \"0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5\"" Oct 29 04:54:22.706795 env[1306]: time="2025-10-29T04:54:22.706041602Z" level=info msg="StopPodSandbox for \"e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e\"" Oct 29 04:54:22.708459 kubelet[2172]: I1029 04:54:22.708434 2172 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" Oct 29 04:54:22.709476 env[1306]: time="2025-10-29T04:54:22.709441936Z" level=info msg="StopPodSandbox for \"7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1\"" Oct 29 04:54:22.711878 kubelet[2172]: I1029 04:54:22.711677 2172 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" Oct 29 04:54:22.713470 env[1306]: time="2025-10-29T04:54:22.713429322Z" level=info msg="StopPodSandbox for \"c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4\"" Oct 29 04:54:22.716367 kubelet[2172]: I1029 04:54:22.715733 2172 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" Oct 29 04:54:22.717522 env[1306]: time="2025-10-29T04:54:22.717451877Z" level=info msg="StopPodSandbox for \"eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1\"" Oct 29 04:54:22.722538 kubelet[2172]: 
I1029 04:54:22.721065 2172 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" Oct 29 04:54:22.722657 env[1306]: time="2025-10-29T04:54:22.722330288Z" level=info msg="StopPodSandbox for \"52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6\"" Oct 29 04:54:22.724905 kubelet[2172]: I1029 04:54:22.723998 2172 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" Oct 29 04:54:22.725530 env[1306]: time="2025-10-29T04:54:22.725483647Z" level=info msg="StopPodSandbox for \"739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4\"" Oct 29 04:54:22.728205 kubelet[2172]: I1029 04:54:22.727499 2172 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" Oct 29 04:54:22.730419 env[1306]: time="2025-10-29T04:54:22.729889240Z" level=info msg="StopPodSandbox for \"a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b\"" Oct 29 04:54:22.926204 env[1306]: time="2025-10-29T04:54:22.926122312Z" level=error msg="StopPodSandbox for \"e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e\" failed" error="failed to destroy network for sandbox \"e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:22.927552 kubelet[2172]: E1029 04:54:22.927187 2172 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" Oct 29 04:54:22.930607 kubelet[2172]: E1029 04:54:22.927357 2172 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e"} Oct 29 04:54:22.930607 kubelet[2172]: E1029 04:54:22.930462 2172 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b4da2a97-feea-487c-8384-a94163380e6f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 29 04:54:22.930607 kubelet[2172]: E1029 04:54:22.930539 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b4da2a97-feea-487c-8384-a94163380e6f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-678d6449b5-m8bcr" podUID="b4da2a97-feea-487c-8384-a94163380e6f" Oct 29 04:54:22.942921 env[1306]: time="2025-10-29T04:54:22.942846330Z" level=error msg="StopPodSandbox for \"7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1\" failed" error="failed to destroy network for sandbox \"7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:22.943209 kubelet[2172]: E1029 04:54:22.943122 2172 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" Oct 29 04:54:22.943315 kubelet[2172]: E1029 04:54:22.943224 2172 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1"} Oct 29 04:54:22.943315 kubelet[2172]: E1029 04:54:22.943276 2172 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"14f0fbf4-8cf4-45f3-bb19-eb68dd03b78e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 29 04:54:22.943531 kubelet[2172]: E1029 04:54:22.943306 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"14f0fbf4-8cf4-45f3-bb19-eb68dd03b78e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-s5c5l" 
podUID="14f0fbf4-8cf4-45f3-bb19-eb68dd03b78e" Oct 29 04:54:22.952235 env[1306]: time="2025-10-29T04:54:22.952174634Z" level=error msg="StopPodSandbox for \"0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5\" failed" error="failed to destroy network for sandbox \"0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:22.952782 kubelet[2172]: E1029 04:54:22.952531 2172 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" Oct 29 04:54:22.952782 kubelet[2172]: E1029 04:54:22.952606 2172 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5"} Oct 29 04:54:22.952782 kubelet[2172]: E1029 04:54:22.952651 2172 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"817b634e-d1b7-42d4-a9ce-40f3f46513d8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 29 04:54:22.952782 kubelet[2172]: E1029 04:54:22.952702 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"817b634e-d1b7-42d4-a9ce-40f3f46513d8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65797c84c4-zrdnx" podUID="817b634e-d1b7-42d4-a9ce-40f3f46513d8" Oct 29 04:54:22.965609 env[1306]: time="2025-10-29T04:54:22.965502709Z" level=error msg="StopPodSandbox for \"eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1\" failed" error="failed to destroy network for sandbox \"eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:22.966266 kubelet[2172]: E1029 04:54:22.965986 2172 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" Oct 29 04:54:22.966266 kubelet[2172]: E1029 04:54:22.966071 2172 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1"} Oct 29 04:54:22.966266 kubelet[2172]: E1029 04:54:22.966120 2172 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b87dfa06-fb00-43d6-9e83-3b9e31aa23c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to 
destroy network for sandbox \\\"eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 29 04:54:22.966266 kubelet[2172]: E1029 04:54:22.966186 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b87dfa06-fb00-43d6-9e83-3b9e31aa23c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lslb8" podUID="b87dfa06-fb00-43d6-9e83-3b9e31aa23c5" Oct 29 04:54:22.967772 env[1306]: time="2025-10-29T04:54:22.967690737Z" level=error msg="StopPodSandbox for \"a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b\" failed" error="failed to destroy network for sandbox \"a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:22.968406 env[1306]: time="2025-10-29T04:54:22.967884977Z" level=error msg="StopPodSandbox for \"739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4\" failed" error="failed to destroy network for sandbox \"739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:22.968497 kubelet[2172]: E1029 04:54:22.968066 2172 log.go:32] "StopPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" Oct 29 04:54:22.968497 kubelet[2172]: E1029 04:54:22.968139 2172 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4"} Oct 29 04:54:22.968497 kubelet[2172]: E1029 04:54:22.968210 2172 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"de4b152a-29bb-4b0c-a12c-2eda92dd0564\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 29 04:54:22.968497 kubelet[2172]: E1029 04:54:22.968243 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"de4b152a-29bb-4b0c-a12c-2eda92dd0564\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tptz2" podUID="de4b152a-29bb-4b0c-a12c-2eda92dd0564" Oct 29 04:54:22.969106 kubelet[2172]: E1029 04:54:22.968306 2172 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" Oct 29 04:54:22.969106 kubelet[2172]: E1029 04:54:22.968338 2172 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b"} Oct 29 04:54:22.969106 kubelet[2172]: E1029 04:54:22.968855 2172 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4d534185-a9e7-4c27-807c-917c6d4b755f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 29 04:54:22.969106 kubelet[2172]: E1029 04:54:22.968907 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4d534185-a9e7-4c27-807c-917c6d4b755f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-4pd8r" podUID="4d534185-a9e7-4c27-807c-917c6d4b755f" Oct 29 04:54:22.969984 env[1306]: time="2025-10-29T04:54:22.968922248Z" level=error msg="StopPodSandbox for \"c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4\" 
failed" error="failed to destroy network for sandbox \"c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:22.970058 kubelet[2172]: E1029 04:54:22.969624 2172 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" Oct 29 04:54:22.970058 kubelet[2172]: E1029 04:54:22.969673 2172 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4"} Oct 29 04:54:22.970058 kubelet[2172]: E1029 04:54:22.969757 2172 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c95faceb-3919-455c-bd6c-4a68d6375a6d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 29 04:54:22.970058 kubelet[2172]: E1029 04:54:22.969809 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c95faceb-3919-455c-bd6c-4a68d6375a6d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-678d6449b5-8q748" podUID="c95faceb-3919-455c-bd6c-4a68d6375a6d" Oct 29 04:54:22.981903 env[1306]: time="2025-10-29T04:54:22.981850750Z" level=error msg="StopPodSandbox for \"52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6\" failed" error="failed to destroy network for sandbox \"52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:22.982541 kubelet[2172]: E1029 04:54:22.982278 2172 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" Oct 29 04:54:22.982541 kubelet[2172]: E1029 04:54:22.982321 2172 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6"} Oct 29 04:54:22.982541 kubelet[2172]: E1029 04:54:22.982426 2172 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9c2b379e-441a-4610-b0bd-30a6fa391f82\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 29 04:54:22.982541 kubelet[2172]: E1029 04:54:22.982462 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9c2b379e-441a-4610-b0bd-30a6fa391f82\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-865b7496cf-28bh8" podUID="9c2b379e-441a-4610-b0bd-30a6fa391f82" Oct 29 04:54:33.486821 env[1306]: time="2025-10-29T04:54:33.486723602Z" level=info msg="StopPodSandbox for \"7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1\"" Oct 29 04:54:33.501414 env[1306]: time="2025-10-29T04:54:33.500838306Z" level=info msg="StopPodSandbox for \"52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6\"" Oct 29 04:54:33.679873 env[1306]: time="2025-10-29T04:54:33.679792048Z" level=error msg="StopPodSandbox for \"52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6\" failed" error="failed to destroy network for sandbox \"52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:33.680183 kubelet[2172]: E1029 04:54:33.680113 2172 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" podSandboxID="52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" Oct 29 04:54:33.680774 kubelet[2172]: E1029 04:54:33.680202 2172 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6"} Oct 29 04:54:33.680774 kubelet[2172]: E1029 04:54:33.680256 2172 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9c2b379e-441a-4610-b0bd-30a6fa391f82\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 29 04:54:33.680774 kubelet[2172]: E1029 04:54:33.680293 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9c2b379e-441a-4610-b0bd-30a6fa391f82\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-865b7496cf-28bh8" podUID="9c2b379e-441a-4610-b0bd-30a6fa391f82" Oct 29 04:54:33.700130 env[1306]: time="2025-10-29T04:54:33.700042803Z" level=error msg="StopPodSandbox for \"7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1\" failed" error="failed to destroy network for sandbox \"7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Oct 29 04:54:33.700818 kubelet[2172]: E1029 04:54:33.700726 2172 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" Oct 29 04:54:33.700935 kubelet[2172]: E1029 04:54:33.700835 2172 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1"} Oct 29 04:54:33.700935 kubelet[2172]: E1029 04:54:33.700918 2172 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"14f0fbf4-8cf4-45f3-bb19-eb68dd03b78e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 29 04:54:33.701445 kubelet[2172]: E1029 04:54:33.700974 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"14f0fbf4-8cf4-45f3-bb19-eb68dd03b78e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-s5c5l" 
podUID="14f0fbf4-8cf4-45f3-bb19-eb68dd03b78e" Oct 29 04:54:34.067366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2254730215.mount: Deactivated successfully. Oct 29 04:54:34.106950 env[1306]: time="2025-10-29T04:54:34.106878716Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:54:34.109250 env[1306]: time="2025-10-29T04:54:34.109214814Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:54:34.111416 env[1306]: time="2025-10-29T04:54:34.111354295Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:54:34.115877 env[1306]: time="2025-10-29T04:54:34.115826105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 29 04:54:34.116335 env[1306]: time="2025-10-29T04:54:34.116297114Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 04:54:34.165719 env[1306]: time="2025-10-29T04:54:34.165555028Z" level=info msg="CreateContainer within sandbox \"8528ad5c6863c70de6f51458dce99c6652ccf0708b0d4f89cf65ce6624f2b48b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 29 04:54:34.189157 env[1306]: time="2025-10-29T04:54:34.189103914Z" level=info msg="CreateContainer within sandbox \"8528ad5c6863c70de6f51458dce99c6652ccf0708b0d4f89cf65ce6624f2b48b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id 
\"71d6bc5d021ddcc10d95b2af898e8a201f0dc4a4a5e05b9cde0366fccf7e5962\"" Oct 29 04:54:34.190304 env[1306]: time="2025-10-29T04:54:34.190267075Z" level=info msg="StartContainer for \"71d6bc5d021ddcc10d95b2af898e8a201f0dc4a4a5e05b9cde0366fccf7e5962\"" Oct 29 04:54:34.193364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3535833753.mount: Deactivated successfully. Oct 29 04:54:34.316837 env[1306]: time="2025-10-29T04:54:34.316778434Z" level=info msg="StartContainer for \"71d6bc5d021ddcc10d95b2af898e8a201f0dc4a4a5e05b9cde0366fccf7e5962\" returns successfully" Oct 29 04:54:34.484969 env[1306]: time="2025-10-29T04:54:34.484910813Z" level=info msg="StopPodSandbox for \"eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1\"" Oct 29 04:54:34.486040 env[1306]: time="2025-10-29T04:54:34.486005346Z" level=info msg="StopPodSandbox for \"c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4\"" Oct 29 04:54:34.564048 env[1306]: time="2025-10-29T04:54:34.563895314Z" level=error msg="StopPodSandbox for \"eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1\" failed" error="failed to destroy network for sandbox \"eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:34.565638 env[1306]: time="2025-10-29T04:54:34.565326970Z" level=error msg="StopPodSandbox for \"c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4\" failed" error="failed to destroy network for sandbox \"c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 29 04:54:34.566552 kubelet[2172]: E1029 04:54:34.566246 2172 log.go:32] "StopPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" Oct 29 04:54:34.566552 kubelet[2172]: E1029 04:54:34.566246 2172 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" Oct 29 04:54:34.566552 kubelet[2172]: E1029 04:54:34.566464 2172 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4"} Oct 29 04:54:34.570287 kubelet[2172]: E1029 04:54:34.566592 2172 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c95faceb-3919-455c-bd6c-4a68d6375a6d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 29 04:54:34.570287 kubelet[2172]: E1029 04:54:34.566391 2172 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1"} Oct 29 04:54:34.570287 kubelet[2172]: E1029 
04:54:34.566673 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c95faceb-3919-455c-bd6c-4a68d6375a6d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-678d6449b5-8q748" podUID="c95faceb-3919-455c-bd6c-4a68d6375a6d" Oct 29 04:54:34.570287 kubelet[2172]: E1029 04:54:34.566712 2172 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b87dfa06-fb00-43d6-9e83-3b9e31aa23c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 29 04:54:34.605177 kubelet[2172]: E1029 04:54:34.566788 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b87dfa06-fb00-43d6-9e83-3b9e31aa23c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lslb8" podUID="b87dfa06-fb00-43d6-9e83-3b9e31aa23c5" Oct 29 04:54:34.907921 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 29 04:54:34.908196 kernel: wireguard: Copyright (C) 2015-2019 Jason A. 
Donenfeld . All Rights Reserved. Oct 29 04:54:35.131968 kubelet[2172]: I1029 04:54:35.129658 2172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4znh4" podStartSLOduration=2.079951753 podStartE2EDuration="28.126157121s" podCreationTimestamp="2025-10-29 04:54:07 +0000 UTC" firstStartedPulling="2025-10-29 04:54:08.072230329 +0000 UTC m=+22.982266311" lastFinishedPulling="2025-10-29 04:54:34.118435691 +0000 UTC m=+49.028471679" observedRunningTime="2025-10-29 04:54:34.839650725 +0000 UTC m=+49.749686750" watchObservedRunningTime="2025-10-29 04:54:35.126157121 +0000 UTC m=+50.036193109" Oct 29 04:54:35.141925 env[1306]: time="2025-10-29T04:54:35.141111581Z" level=info msg="StopPodSandbox for \"0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5\"" Oct 29 04:54:35.594095 env[1306]: 2025-10-29 04:54:35.322 [INFO][3409] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" Oct 29 04:54:35.594095 env[1306]: 2025-10-29 04:54:35.322 [INFO][3409] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" iface="eth0" netns="/var/run/netns/cni-102f8a5b-66dc-ba47-2246-3597c32ecc3f" Oct 29 04:54:35.594095 env[1306]: 2025-10-29 04:54:35.323 [INFO][3409] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" iface="eth0" netns="/var/run/netns/cni-102f8a5b-66dc-ba47-2246-3597c32ecc3f" Oct 29 04:54:35.594095 env[1306]: 2025-10-29 04:54:35.324 [INFO][3409] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" iface="eth0" netns="/var/run/netns/cni-102f8a5b-66dc-ba47-2246-3597c32ecc3f" Oct 29 04:54:35.594095 env[1306]: 2025-10-29 04:54:35.324 [INFO][3409] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" Oct 29 04:54:35.594095 env[1306]: 2025-10-29 04:54:35.324 [INFO][3409] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" Oct 29 04:54:35.594095 env[1306]: 2025-10-29 04:54:35.557 [INFO][3417] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" HandleID="k8s-pod-network.0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" Workload="srv--xtjva.gb1.brightbox.com-k8s-whisker--65797c84c4--zrdnx-eth0" Oct 29 04:54:35.594095 env[1306]: 2025-10-29 04:54:35.560 [INFO][3417] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 04:54:35.594095 env[1306]: 2025-10-29 04:54:35.561 [INFO][3417] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 04:54:35.594095 env[1306]: 2025-10-29 04:54:35.578 [WARNING][3417] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" HandleID="k8s-pod-network.0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" Workload="srv--xtjva.gb1.brightbox.com-k8s-whisker--65797c84c4--zrdnx-eth0" Oct 29 04:54:35.594095 env[1306]: 2025-10-29 04:54:35.578 [INFO][3417] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" HandleID="k8s-pod-network.0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" Workload="srv--xtjva.gb1.brightbox.com-k8s-whisker--65797c84c4--zrdnx-eth0" Oct 29 04:54:35.594095 env[1306]: 2025-10-29 04:54:35.589 [INFO][3417] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 04:54:35.594095 env[1306]: 2025-10-29 04:54:35.591 [INFO][3409] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" Oct 29 04:54:35.601669 env[1306]: time="2025-10-29T04:54:35.601170973Z" level=info msg="TearDown network for sandbox \"0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5\" successfully" Oct 29 04:54:35.601669 env[1306]: time="2025-10-29T04:54:35.601242102Z" level=info msg="StopPodSandbox for \"0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5\" returns successfully" Oct 29 04:54:35.599285 systemd[1]: run-netns-cni\x2d102f8a5b\x2d66dc\x2dba47\x2d2246\x2d3597c32ecc3f.mount: Deactivated successfully. Oct 29 04:54:35.698070 kubelet[2172]: I1029 04:54:35.695421 2172 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/817b634e-d1b7-42d4-a9ce-40f3f46513d8-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "817b634e-d1b7-42d4-a9ce-40f3f46513d8" (UID: "817b634e-d1b7-42d4-a9ce-40f3f46513d8"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 29 04:54:35.699964 kubelet[2172]: I1029 04:54:35.699929 2172 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/817b634e-d1b7-42d4-a9ce-40f3f46513d8-whisker-ca-bundle\") pod \"817b634e-d1b7-42d4-a9ce-40f3f46513d8\" (UID: \"817b634e-d1b7-42d4-a9ce-40f3f46513d8\") " Oct 29 04:54:35.700210 kubelet[2172]: I1029 04:54:35.700165 2172 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wqsh\" (UniqueName: \"kubernetes.io/projected/817b634e-d1b7-42d4-a9ce-40f3f46513d8-kube-api-access-2wqsh\") pod \"817b634e-d1b7-42d4-a9ce-40f3f46513d8\" (UID: \"817b634e-d1b7-42d4-a9ce-40f3f46513d8\") " Oct 29 04:54:35.700880 kubelet[2172]: I1029 04:54:35.700851 2172 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/817b634e-d1b7-42d4-a9ce-40f3f46513d8-whisker-backend-key-pair\") pod \"817b634e-d1b7-42d4-a9ce-40f3f46513d8\" (UID: \"817b634e-d1b7-42d4-a9ce-40f3f46513d8\") " Oct 29 04:54:35.701154 kubelet[2172]: I1029 04:54:35.701125 2172 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/817b634e-d1b7-42d4-a9ce-40f3f46513d8-whisker-ca-bundle\") on node \"srv-xtjva.gb1.brightbox.com\" DevicePath \"\"" Oct 29 04:54:35.709007 systemd[1]: var-lib-kubelet-pods-817b634e\x2dd1b7\x2d42d4\x2da9ce\x2d40f3f46513d8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2wqsh.mount: Deactivated successfully. 
Oct 29 04:54:35.711707 kubelet[2172]: I1029 04:54:35.711647 2172 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/817b634e-d1b7-42d4-a9ce-40f3f46513d8-kube-api-access-2wqsh" (OuterVolumeSpecName: "kube-api-access-2wqsh") pod "817b634e-d1b7-42d4-a9ce-40f3f46513d8" (UID: "817b634e-d1b7-42d4-a9ce-40f3f46513d8"). InnerVolumeSpecName "kube-api-access-2wqsh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 29 04:54:35.717651 systemd[1]: var-lib-kubelet-pods-817b634e\x2dd1b7\x2d42d4\x2da9ce\x2d40f3f46513d8-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 29 04:54:35.719662 kubelet[2172]: I1029 04:54:35.719603 2172 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/817b634e-d1b7-42d4-a9ce-40f3f46513d8-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "817b634e-d1b7-42d4-a9ce-40f3f46513d8" (UID: "817b634e-d1b7-42d4-a9ce-40f3f46513d8"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 29 04:54:35.801775 kubelet[2172]: I1029 04:54:35.801706 2172 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2wqsh\" (UniqueName: \"kubernetes.io/projected/817b634e-d1b7-42d4-a9ce-40f3f46513d8-kube-api-access-2wqsh\") on node \"srv-xtjva.gb1.brightbox.com\" DevicePath \"\"" Oct 29 04:54:35.802044 kubelet[2172]: I1029 04:54:35.802017 2172 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/817b634e-d1b7-42d4-a9ce-40f3f46513d8-whisker-backend-key-pair\") on node \"srv-xtjva.gb1.brightbox.com\" DevicePath \"\"" Oct 29 04:54:36.003414 kubelet[2172]: I1029 04:54:36.003324 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxv89\" (UniqueName: \"kubernetes.io/projected/419b3e19-fef1-48f6-b46c-276ff1e0b621-kube-api-access-sxv89\") pod \"whisker-8647577b76-v9phq\" (UID: \"419b3e19-fef1-48f6-b46c-276ff1e0b621\") " pod="calico-system/whisker-8647577b76-v9phq" Oct 29 04:54:36.005238 kubelet[2172]: I1029 04:54:36.005206 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/419b3e19-fef1-48f6-b46c-276ff1e0b621-whisker-ca-bundle\") pod \"whisker-8647577b76-v9phq\" (UID: \"419b3e19-fef1-48f6-b46c-276ff1e0b621\") " pod="calico-system/whisker-8647577b76-v9phq" Oct 29 04:54:36.007066 kubelet[2172]: I1029 04:54:36.005441 2172 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/419b3e19-fef1-48f6-b46c-276ff1e0b621-whisker-backend-key-pair\") pod \"whisker-8647577b76-v9phq\" (UID: \"419b3e19-fef1-48f6-b46c-276ff1e0b621\") " pod="calico-system/whisker-8647577b76-v9phq" Oct 29 04:54:36.231147 env[1306]: time="2025-10-29T04:54:36.231045297Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8647577b76-v9phq,Uid:419b3e19-fef1-48f6-b46c-276ff1e0b621,Namespace:calico-system,Attempt:0,}" Oct 29 04:54:36.456245 systemd-networkd[1070]: caliea94b775d2a: Link UP Oct 29 04:54:36.466943 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 29 04:54:36.467413 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliea94b775d2a: link becomes ready Oct 29 04:54:36.467216 systemd-networkd[1070]: caliea94b775d2a: Gained carrier Oct 29 04:54:36.486860 env[1306]: time="2025-10-29T04:54:36.486332194Z" level=info msg="StopPodSandbox for \"a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b\"" Oct 29 04:54:36.499710 env[1306]: 2025-10-29 04:54:36.304 [INFO][3438] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 04:54:36.499710 env[1306]: 2025-10-29 04:54:36.322 [INFO][3438] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--xtjva.gb1.brightbox.com-k8s-whisker--8647577b76--v9phq-eth0 whisker-8647577b76- calico-system 419b3e19-fef1-48f6-b46c-276ff1e0b621 945 0 2025-10-29 04:54:35 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8647577b76 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-xtjva.gb1.brightbox.com whisker-8647577b76-v9phq eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliea94b775d2a [] [] }} ContainerID="fecdce4723944b74b1a3c10c7a3521d654f837ed02d733444a58e9900d51f404" Namespace="calico-system" Pod="whisker-8647577b76-v9phq" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-whisker--8647577b76--v9phq-" Oct 29 04:54:36.499710 env[1306]: 2025-10-29 04:54:36.322 [INFO][3438] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fecdce4723944b74b1a3c10c7a3521d654f837ed02d733444a58e9900d51f404" Namespace="calico-system" Pod="whisker-8647577b76-v9phq" 
WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-whisker--8647577b76--v9phq-eth0" Oct 29 04:54:36.499710 env[1306]: 2025-10-29 04:54:36.362 [INFO][3451] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fecdce4723944b74b1a3c10c7a3521d654f837ed02d733444a58e9900d51f404" HandleID="k8s-pod-network.fecdce4723944b74b1a3c10c7a3521d654f837ed02d733444a58e9900d51f404" Workload="srv--xtjva.gb1.brightbox.com-k8s-whisker--8647577b76--v9phq-eth0" Oct 29 04:54:36.499710 env[1306]: 2025-10-29 04:54:36.362 [INFO][3451] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fecdce4723944b74b1a3c10c7a3521d654f837ed02d733444a58e9900d51f404" HandleID="k8s-pod-network.fecdce4723944b74b1a3c10c7a3521d654f837ed02d733444a58e9900d51f404" Workload="srv--xtjva.gb1.brightbox.com-k8s-whisker--8647577b76--v9phq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-xtjva.gb1.brightbox.com", "pod":"whisker-8647577b76-v9phq", "timestamp":"2025-10-29 04:54:36.362337831 +0000 UTC"}, Hostname:"srv-xtjva.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 04:54:36.499710 env[1306]: 2025-10-29 04:54:36.362 [INFO][3451] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 04:54:36.499710 env[1306]: 2025-10-29 04:54:36.362 [INFO][3451] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 04:54:36.499710 env[1306]: 2025-10-29 04:54:36.363 [INFO][3451] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-xtjva.gb1.brightbox.com' Oct 29 04:54:36.499710 env[1306]: 2025-10-29 04:54:36.375 [INFO][3451] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fecdce4723944b74b1a3c10c7a3521d654f837ed02d733444a58e9900d51f404" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:36.499710 env[1306]: 2025-10-29 04:54:36.393 [INFO][3451] ipam/ipam.go 394: Looking up existing affinities for host host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:36.499710 env[1306]: 2025-10-29 04:54:36.400 [INFO][3451] ipam/ipam.go 511: Trying affinity for 192.168.31.128/26 host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:36.499710 env[1306]: 2025-10-29 04:54:36.403 [INFO][3451] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.128/26 host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:36.499710 env[1306]: 2025-10-29 04:54:36.406 [INFO][3451] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.128/26 host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:36.499710 env[1306]: 2025-10-29 04:54:36.406 [INFO][3451] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.31.128/26 handle="k8s-pod-network.fecdce4723944b74b1a3c10c7a3521d654f837ed02d733444a58e9900d51f404" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:36.499710 env[1306]: 2025-10-29 04:54:36.409 [INFO][3451] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fecdce4723944b74b1a3c10c7a3521d654f837ed02d733444a58e9900d51f404 Oct 29 04:54:36.499710 env[1306]: 2025-10-29 04:54:36.415 [INFO][3451] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.31.128/26 handle="k8s-pod-network.fecdce4723944b74b1a3c10c7a3521d654f837ed02d733444a58e9900d51f404" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:36.499710 env[1306]: 2025-10-29 04:54:36.423 [INFO][3451] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.31.129/26] 
block=192.168.31.128/26 handle="k8s-pod-network.fecdce4723944b74b1a3c10c7a3521d654f837ed02d733444a58e9900d51f404" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:36.499710 env[1306]: 2025-10-29 04:54:36.424 [INFO][3451] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.129/26] handle="k8s-pod-network.fecdce4723944b74b1a3c10c7a3521d654f837ed02d733444a58e9900d51f404" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:36.499710 env[1306]: 2025-10-29 04:54:36.424 [INFO][3451] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 04:54:36.499710 env[1306]: 2025-10-29 04:54:36.424 [INFO][3451] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.31.129/26] IPv6=[] ContainerID="fecdce4723944b74b1a3c10c7a3521d654f837ed02d733444a58e9900d51f404" HandleID="k8s-pod-network.fecdce4723944b74b1a3c10c7a3521d654f837ed02d733444a58e9900d51f404" Workload="srv--xtjva.gb1.brightbox.com-k8s-whisker--8647577b76--v9phq-eth0" Oct 29 04:54:36.501161 env[1306]: 2025-10-29 04:54:36.428 [INFO][3438] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fecdce4723944b74b1a3c10c7a3521d654f837ed02d733444a58e9900d51f404" Namespace="calico-system" Pod="whisker-8647577b76-v9phq" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-whisker--8647577b76--v9phq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xtjva.gb1.brightbox.com-k8s-whisker--8647577b76--v9phq-eth0", GenerateName:"whisker-8647577b76-", Namespace:"calico-system", SelfLink:"", UID:"419b3e19-fef1-48f6-b46c-276ff1e0b621", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 4, 54, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8647577b76", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xtjva.gb1.brightbox.com", ContainerID:"", Pod:"whisker-8647577b76-v9phq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.31.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliea94b775d2a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 04:54:36.501161 env[1306]: 2025-10-29 04:54:36.428 [INFO][3438] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.129/32] ContainerID="fecdce4723944b74b1a3c10c7a3521d654f837ed02d733444a58e9900d51f404" Namespace="calico-system" Pod="whisker-8647577b76-v9phq" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-whisker--8647577b76--v9phq-eth0" Oct 29 04:54:36.501161 env[1306]: 2025-10-29 04:54:36.429 [INFO][3438] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliea94b775d2a ContainerID="fecdce4723944b74b1a3c10c7a3521d654f837ed02d733444a58e9900d51f404" Namespace="calico-system" Pod="whisker-8647577b76-v9phq" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-whisker--8647577b76--v9phq-eth0" Oct 29 04:54:36.501161 env[1306]: 2025-10-29 04:54:36.470 [INFO][3438] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fecdce4723944b74b1a3c10c7a3521d654f837ed02d733444a58e9900d51f404" Namespace="calico-system" Pod="whisker-8647577b76-v9phq" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-whisker--8647577b76--v9phq-eth0" Oct 29 04:54:36.501161 env[1306]: 2025-10-29 04:54:36.473 [INFO][3438] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="fecdce4723944b74b1a3c10c7a3521d654f837ed02d733444a58e9900d51f404" Namespace="calico-system" Pod="whisker-8647577b76-v9phq" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-whisker--8647577b76--v9phq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xtjva.gb1.brightbox.com-k8s-whisker--8647577b76--v9phq-eth0", GenerateName:"whisker-8647577b76-", Namespace:"calico-system", SelfLink:"", UID:"419b3e19-fef1-48f6-b46c-276ff1e0b621", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 4, 54, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8647577b76", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xtjva.gb1.brightbox.com", ContainerID:"fecdce4723944b74b1a3c10c7a3521d654f837ed02d733444a58e9900d51f404", Pod:"whisker-8647577b76-v9phq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.31.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliea94b775d2a", MAC:"6a:9b:a8:6b:61:75", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 04:54:36.501161 env[1306]: 2025-10-29 04:54:36.496 [INFO][3438] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fecdce4723944b74b1a3c10c7a3521d654f837ed02d733444a58e9900d51f404" Namespace="calico-system" Pod="whisker-8647577b76-v9phq" 
WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-whisker--8647577b76--v9phq-eth0" Oct 29 04:54:36.518427 env[1306]: time="2025-10-29T04:54:36.518278084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 04:54:36.518763 env[1306]: time="2025-10-29T04:54:36.518358998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 04:54:36.518763 env[1306]: time="2025-10-29T04:54:36.518430213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 04:54:36.519284 env[1306]: time="2025-10-29T04:54:36.519194772Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fecdce4723944b74b1a3c10c7a3521d654f837ed02d733444a58e9900d51f404 pid=3489 runtime=io.containerd.runc.v2 Oct 29 04:54:36.658391 env[1306]: time="2025-10-29T04:54:36.658189226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8647577b76-v9phq,Uid:419b3e19-fef1-48f6-b46c-276ff1e0b621,Namespace:calico-system,Attempt:0,} returns sandbox id \"fecdce4723944b74b1a3c10c7a3521d654f837ed02d733444a58e9900d51f404\"" Oct 29 04:54:36.665871 env[1306]: time="2025-10-29T04:54:36.665558136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 04:54:36.676137 env[1306]: 2025-10-29 04:54:36.582 [INFO][3473] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" Oct 29 04:54:36.676137 env[1306]: 2025-10-29 04:54:36.582 [INFO][3473] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" iface="eth0" netns="/var/run/netns/cni-e407b808-fd17-600e-f4e2-d840333fc90b" Oct 29 04:54:36.676137 env[1306]: 2025-10-29 04:54:36.583 [INFO][3473] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" iface="eth0" netns="/var/run/netns/cni-e407b808-fd17-600e-f4e2-d840333fc90b" Oct 29 04:54:36.676137 env[1306]: 2025-10-29 04:54:36.584 [INFO][3473] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" iface="eth0" netns="/var/run/netns/cni-e407b808-fd17-600e-f4e2-d840333fc90b" Oct 29 04:54:36.676137 env[1306]: 2025-10-29 04:54:36.584 [INFO][3473] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" Oct 29 04:54:36.676137 env[1306]: 2025-10-29 04:54:36.584 [INFO][3473] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" Oct 29 04:54:36.676137 env[1306]: 2025-10-29 04:54:36.647 [INFO][3514] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" HandleID="k8s-pod-network.a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" Workload="srv--xtjva.gb1.brightbox.com-k8s-goldmane--666569f655--4pd8r-eth0" Oct 29 04:54:36.676137 env[1306]: 2025-10-29 04:54:36.647 [INFO][3514] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 04:54:36.676137 env[1306]: 2025-10-29 04:54:36.647 [INFO][3514] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 04:54:36.676137 env[1306]: 2025-10-29 04:54:36.667 [WARNING][3514] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" HandleID="k8s-pod-network.a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" Workload="srv--xtjva.gb1.brightbox.com-k8s-goldmane--666569f655--4pd8r-eth0" Oct 29 04:54:36.676137 env[1306]: 2025-10-29 04:54:36.667 [INFO][3514] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" HandleID="k8s-pod-network.a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" Workload="srv--xtjva.gb1.brightbox.com-k8s-goldmane--666569f655--4pd8r-eth0" Oct 29 04:54:36.676137 env[1306]: 2025-10-29 04:54:36.671 [INFO][3514] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 04:54:36.676137 env[1306]: 2025-10-29 04:54:36.674 [INFO][3473] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" Oct 29 04:54:36.677090 env[1306]: time="2025-10-29T04:54:36.676301002Z" level=info msg="TearDown network for sandbox \"a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b\" successfully" Oct 29 04:54:36.677090 env[1306]: time="2025-10-29T04:54:36.676347847Z" level=info msg="StopPodSandbox for \"a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b\" returns successfully" Oct 29 04:54:36.677318 env[1306]: time="2025-10-29T04:54:36.677271375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4pd8r,Uid:4d534185-a9e7-4c27-807c-917c6d4b755f,Namespace:calico-system,Attempt:1,}" Oct 29 04:54:36.856815 systemd-networkd[1070]: cali61b4fc9dc78: Link UP Oct 29 04:54:36.867000 audit[3586]: AVC avc: denied { write } for pid=3586 comm="tee" name="fd" dev="proc" ino=30811 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 29 04:54:36.878394 kernel: audit: type=1400 audit(1761713676.867:313): avc: denied { write } for pid=3586 
comm="tee" name="fd" dev="proc" ino=30811 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 29 04:54:36.878918 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali61b4fc9dc78: link becomes ready Oct 29 04:54:36.879037 systemd-networkd[1070]: cali61b4fc9dc78: Gained carrier Oct 29 04:54:36.893000 audit[3593]: AVC avc: denied { write } for pid=3593 comm="tee" name="fd" dev="proc" ino=30819 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 29 04:54:36.900414 kernel: audit: type=1400 audit(1761713676.893:314): avc: denied { write } for pid=3593 comm="tee" name="fd" dev="proc" ino=30819 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 29 04:54:36.893000 audit[3593]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffc3b3b7ca a2=241 a3=1b6 items=1 ppid=3568 pid=3593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:36.912410 kernel: audit: type=1300 audit(1761713676.893:314): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffc3b3b7ca a2=241 a3=1b6 items=1 ppid=3568 pid=3593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:36.912602 env[1306]: 2025-10-29 04:54:36.719 [INFO][3533] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 04:54:36.912602 env[1306]: 2025-10-29 04:54:36.736 [INFO][3533] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--xtjva.gb1.brightbox.com-k8s-goldmane--666569f655--4pd8r-eth0 goldmane-666569f655- calico-system 4d534185-a9e7-4c27-807c-917c6d4b755f 950 0 2025-10-29 04:54:04 +0000 UTC 
map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-xtjva.gb1.brightbox.com goldmane-666569f655-4pd8r eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali61b4fc9dc78 [] [] }} ContainerID="8232d3906192e649b76adbc082c6af04cb4c4de497bdbe09ed2aebd27dac7fda" Namespace="calico-system" Pod="goldmane-666569f655-4pd8r" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-goldmane--666569f655--4pd8r-" Oct 29 04:54:36.912602 env[1306]: 2025-10-29 04:54:36.736 [INFO][3533] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8232d3906192e649b76adbc082c6af04cb4c4de497bdbe09ed2aebd27dac7fda" Namespace="calico-system" Pod="goldmane-666569f655-4pd8r" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-goldmane--666569f655--4pd8r-eth0" Oct 29 04:54:36.912602 env[1306]: 2025-10-29 04:54:36.774 [INFO][3547] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8232d3906192e649b76adbc082c6af04cb4c4de497bdbe09ed2aebd27dac7fda" HandleID="k8s-pod-network.8232d3906192e649b76adbc082c6af04cb4c4de497bdbe09ed2aebd27dac7fda" Workload="srv--xtjva.gb1.brightbox.com-k8s-goldmane--666569f655--4pd8r-eth0" Oct 29 04:54:36.912602 env[1306]: 2025-10-29 04:54:36.774 [INFO][3547] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8232d3906192e649b76adbc082c6af04cb4c4de497bdbe09ed2aebd27dac7fda" HandleID="k8s-pod-network.8232d3906192e649b76adbc082c6af04cb4c4de497bdbe09ed2aebd27dac7fda" Workload="srv--xtjva.gb1.brightbox.com-k8s-goldmane--666569f655--4pd8r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-xtjva.gb1.brightbox.com", "pod":"goldmane-666569f655-4pd8r", "timestamp":"2025-10-29 04:54:36.774438114 +0000 UTC"}, Hostname:"srv-xtjva.gb1.brightbox.com", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 04:54:36.912602 env[1306]: 2025-10-29 04:54:36.774 [INFO][3547] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 04:54:36.912602 env[1306]: 2025-10-29 04:54:36.775 [INFO][3547] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 04:54:36.912602 env[1306]: 2025-10-29 04:54:36.775 [INFO][3547] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-xtjva.gb1.brightbox.com' Oct 29 04:54:36.912602 env[1306]: 2025-10-29 04:54:36.785 [INFO][3547] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8232d3906192e649b76adbc082c6af04cb4c4de497bdbe09ed2aebd27dac7fda" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:36.912602 env[1306]: 2025-10-29 04:54:36.799 [INFO][3547] ipam/ipam.go 394: Looking up existing affinities for host host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:36.912602 env[1306]: 2025-10-29 04:54:36.811 [INFO][3547] ipam/ipam.go 511: Trying affinity for 192.168.31.128/26 host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:36.912602 env[1306]: 2025-10-29 04:54:36.815 [INFO][3547] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.128/26 host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:36.912602 env[1306]: 2025-10-29 04:54:36.819 [INFO][3547] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.128/26 host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:36.912602 env[1306]: 2025-10-29 04:54:36.819 [INFO][3547] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.31.128/26 handle="k8s-pod-network.8232d3906192e649b76adbc082c6af04cb4c4de497bdbe09ed2aebd27dac7fda" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:36.912602 env[1306]: 2025-10-29 04:54:36.826 [INFO][3547] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.8232d3906192e649b76adbc082c6af04cb4c4de497bdbe09ed2aebd27dac7fda Oct 29 04:54:36.912602 env[1306]: 2025-10-29 04:54:36.833 [INFO][3547] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.31.128/26 handle="k8s-pod-network.8232d3906192e649b76adbc082c6af04cb4c4de497bdbe09ed2aebd27dac7fda" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:36.912602 env[1306]: 2025-10-29 04:54:36.840 [INFO][3547] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.31.130/26] block=192.168.31.128/26 handle="k8s-pod-network.8232d3906192e649b76adbc082c6af04cb4c4de497bdbe09ed2aebd27dac7fda" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:36.912602 env[1306]: 2025-10-29 04:54:36.841 [INFO][3547] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.130/26] handle="k8s-pod-network.8232d3906192e649b76adbc082c6af04cb4c4de497bdbe09ed2aebd27dac7fda" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:36.912602 env[1306]: 2025-10-29 04:54:36.841 [INFO][3547] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 04:54:36.912602 env[1306]: 2025-10-29 04:54:36.841 [INFO][3547] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.31.130/26] IPv6=[] ContainerID="8232d3906192e649b76adbc082c6af04cb4c4de497bdbe09ed2aebd27dac7fda" HandleID="k8s-pod-network.8232d3906192e649b76adbc082c6af04cb4c4de497bdbe09ed2aebd27dac7fda" Workload="srv--xtjva.gb1.brightbox.com-k8s-goldmane--666569f655--4pd8r-eth0" Oct 29 04:54:36.914045 env[1306]: 2025-10-29 04:54:36.843 [INFO][3533] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8232d3906192e649b76adbc082c6af04cb4c4de497bdbe09ed2aebd27dac7fda" Namespace="calico-system" Pod="goldmane-666569f655-4pd8r" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-goldmane--666569f655--4pd8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xtjva.gb1.brightbox.com-k8s-goldmane--666569f655--4pd8r-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"4d534185-a9e7-4c27-807c-917c6d4b755f", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 4, 54, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xtjva.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-666569f655-4pd8r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.31.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"cali61b4fc9dc78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 04:54:36.914045 env[1306]: 2025-10-29 04:54:36.843 [INFO][3533] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.130/32] ContainerID="8232d3906192e649b76adbc082c6af04cb4c4de497bdbe09ed2aebd27dac7fda" Namespace="calico-system" Pod="goldmane-666569f655-4pd8r" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-goldmane--666569f655--4pd8r-eth0" Oct 29 04:54:36.914045 env[1306]: 2025-10-29 04:54:36.843 [INFO][3533] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali61b4fc9dc78 ContainerID="8232d3906192e649b76adbc082c6af04cb4c4de497bdbe09ed2aebd27dac7fda" Namespace="calico-system" Pod="goldmane-666569f655-4pd8r" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-goldmane--666569f655--4pd8r-eth0" Oct 29 04:54:36.914045 env[1306]: 2025-10-29 04:54:36.882 [INFO][3533] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8232d3906192e649b76adbc082c6af04cb4c4de497bdbe09ed2aebd27dac7fda" Namespace="calico-system" Pod="goldmane-666569f655-4pd8r" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-goldmane--666569f655--4pd8r-eth0" Oct 29 04:54:36.914045 env[1306]: 2025-10-29 04:54:36.886 [INFO][3533] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8232d3906192e649b76adbc082c6af04cb4c4de497bdbe09ed2aebd27dac7fda" Namespace="calico-system" Pod="goldmane-666569f655-4pd8r" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-goldmane--666569f655--4pd8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xtjva.gb1.brightbox.com-k8s-goldmane--666569f655--4pd8r-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"4d534185-a9e7-4c27-807c-917c6d4b755f", 
ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 4, 54, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xtjva.gb1.brightbox.com", ContainerID:"8232d3906192e649b76adbc082c6af04cb4c4de497bdbe09ed2aebd27dac7fda", Pod:"goldmane-666569f655-4pd8r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.31.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali61b4fc9dc78", MAC:"7a:be:19:74:25:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 04:54:36.914045 env[1306]: 2025-10-29 04:54:36.904 [INFO][3533] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8232d3906192e649b76adbc082c6af04cb4c4de497bdbe09ed2aebd27dac7fda" Namespace="calico-system" Pod="goldmane-666569f655-4pd8r" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-goldmane--666569f655--4pd8r-eth0" Oct 29 04:54:36.893000 audit: CWD cwd="/etc/service/enabled/bird/log" Oct 29 04:54:36.923409 kernel: audit: type=1307 audit(1761713676.893:314): cwd="/etc/service/enabled/bird/log" Oct 29 04:54:36.893000 audit: PATH item=0 name="/dev/fd/63" inode=29954 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:54:36.930401 kernel: audit: type=1302 
audit(1761713676.893:314): item=0 name="/dev/fd/63" inode=29954 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:54:36.893000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 29 04:54:36.939363 kernel: audit: type=1327 audit(1761713676.893:314): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 29 04:54:36.952089 env[1306]: time="2025-10-29T04:54:36.951981337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 04:54:36.952264 env[1306]: time="2025-10-29T04:54:36.952129307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 04:54:36.952264 env[1306]: time="2025-10-29T04:54:36.952212378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 04:54:36.952607 env[1306]: time="2025-10-29T04:54:36.952550696Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8232d3906192e649b76adbc082c6af04cb4c4de497bdbe09ed2aebd27dac7fda pid=3614 runtime=io.containerd.runc.v2 Oct 29 04:54:36.867000 audit[3586]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffed6d557c9 a2=241 a3=1b6 items=1 ppid=3569 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:36.960440 kernel: audit: type=1300 audit(1761713676.867:313): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffed6d557c9 a2=241 a3=1b6 items=1 ppid=3569 pid=3586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:36.867000 audit: CWD cwd="/etc/service/enabled/felix/log" Oct 29 04:54:36.867000 audit: PATH item=0 name="/dev/fd/63" inode=29947 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:54:36.980413 kernel: audit: type=1307 audit(1761713676.867:313): cwd="/etc/service/enabled/felix/log" Oct 29 04:54:36.980558 kernel: audit: type=1302 audit(1761713676.867:313): item=0 name="/dev/fd/63" inode=29947 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:54:36.980628 kernel: audit: type=1327 audit(1761713676.867:313): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 29 04:54:36.867000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 29 04:54:36.980752 env[1306]: time="2025-10-29T04:54:36.977062852Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 04:54:36.975000 audit[3634]: AVC avc: denied { write } for pid=3634 comm="tee" name="fd" dev="proc" ino=30843 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 29 04:54:36.975000 audit[3634]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd2b4467c9 a2=241 a3=1b6 items=1 ppid=3561 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:36.975000 audit: CWD cwd="/etc/service/enabled/confd/log" Oct 29 04:54:36.975000 audit: PATH item=0 name="/dev/fd/63" inode=30840 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:54:36.975000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 29 04:54:36.989000 audit[3620]: AVC avc: denied { write } for pid=3620 comm="tee" name="fd" dev="proc" ino=30847 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 29 04:54:36.989000 audit[3620]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdcffbb7c9 a2=241 a3=1b6 items=1 ppid=3570 pid=3620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:36.989000 audit: CWD cwd="/etc/service/enabled/bird6/log" Oct 29 04:54:36.989000 
audit: PATH item=0 name="/dev/fd/63" inode=30829 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:54:36.989000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 29 04:54:36.991861 kubelet[2172]: E1029 04:54:36.990947 2172 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 04:54:36.991861 kubelet[2172]: E1029 04:54:36.991047 2172 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 04:54:36.992555 env[1306]: time="2025-10-29T04:54:36.984496376Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 04:54:36.992852 kubelet[2172]: I1029 04:54:36.991883 2172 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 29 04:54:37.002408 kubelet[2172]: E1029 04:54:37.001472 2172 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1de7ade95d214bc29a765b1d29f494cd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sxv89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8647577b76-v9phq_calico-system(419b3e19-fef1-48f6-b46c-276ff1e0b621): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 04:54:37.009798 env[1306]: time="2025-10-29T04:54:37.009726496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 04:54:37.031000 
audit[3648]: AVC avc: denied { write } for pid=3648 comm="tee" name="fd" dev="proc" ino=30011 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 29 04:54:37.031000 audit[3648]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffeb309d7cb a2=241 a3=1b6 items=1 ppid=3567 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:37.031000 audit: CWD cwd="/etc/service/enabled/cni/log" Oct 29 04:54:37.031000 audit: PATH item=0 name="/dev/fd/63" inode=30004 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:54:37.031000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 29 04:54:37.036000 audit[3640]: AVC avc: denied { write } for pid=3640 comm="tee" name="fd" dev="proc" ino=30882 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 29 04:54:37.036000 audit[3640]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff24fff7ba a2=241 a3=1b6 items=1 ppid=3564 pid=3640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:37.036000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Oct 29 04:54:37.036000 audit: PATH item=0 name="/dev/fd/63" inode=29998 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:54:37.036000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 29 04:54:37.088000 audit[3647]: AVC avc: denied { write } for pid=3647 comm="tee" name="fd" dev="proc" ino=30026 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 29 04:54:37.088000 audit[3647]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc8eb547b9 a2=241 a3=1b6 items=1 ppid=3573 pid=3647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:37.088000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Oct 29 04:54:37.088000 audit: PATH item=0 name="/dev/fd/63" inode=30005 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 04:54:37.088000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 29 04:54:37.116512 systemd[1]: run-netns-cni\x2de407b808\x2dfd17\x2d600e\x2df4e2\x2dd840333fc90b.mount: Deactivated successfully. 
Oct 29 04:54:37.303152 env[1306]: time="2025-10-29T04:54:37.296611069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4pd8r,Uid:4d534185-a9e7-4c27-807c-917c6d4b755f,Namespace:calico-system,Attempt:1,} returns sandbox id \"8232d3906192e649b76adbc082c6af04cb4c4de497bdbe09ed2aebd27dac7fda\"" Oct 29 04:54:37.350326 env[1306]: time="2025-10-29T04:54:37.350205261Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 04:54:37.351265 env[1306]: time="2025-10-29T04:54:37.351160504Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 04:54:37.351571 kubelet[2172]: E1029 04:54:37.351503 2172 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 04:54:37.351729 kubelet[2172]: E1029 04:54:37.351585 2172 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 04:54:37.351984 kubelet[2172]: E1029 04:54:37.351903 2172 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sxv89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8647577b76-v9phq_calico-system(419b3e19-fef1-48f6-b46c-276ff1e0b621): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 04:54:37.352538 env[1306]: time="2025-10-29T04:54:37.352499728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 29 04:54:37.354914 kubelet[2172]: E1029 04:54:37.354851 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8647577b76-v9phq" podUID="419b3e19-fef1-48f6-b46c-276ff1e0b621" Oct 29 04:54:37.488185 env[1306]: time="2025-10-29T04:54:37.488066266Z" level=info msg="StopPodSandbox for \"e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e\"" Oct 29 04:54:37.490079 kubelet[2172]: I1029 04:54:37.490014 2172 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="817b634e-d1b7-42d4-a9ce-40f3f46513d8" path="/var/lib/kubelet/pods/817b634e-d1b7-42d4-a9ce-40f3f46513d8/volumes" Oct 29 04:54:37.501449 env[1306]: time="2025-10-29T04:54:37.501330715Z" level=info msg="StopPodSandbox for \"739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4\"" Oct 29 04:54:37.757465 env[1306]: time="2025-10-29T04:54:37.757082437Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 04:54:37.763521 
env[1306]: time="2025-10-29T04:54:37.762530607Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 04:54:37.763605 kubelet[2172]: E1029 04:54:37.762896 2172 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 04:54:37.763605 kubelet[2172]: E1029 04:54:37.762960 2172 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 04:54:37.763605 kubelet[2172]: E1029 04:54:37.763175 2172 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r268s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4pd8r_calico-system(4d534185-a9e7-4c27-807c-917c6d4b755f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 04:54:37.767149 kubelet[2172]: E1029 04:54:37.766837 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4pd8r" podUID="4d534185-a9e7-4c27-807c-917c6d4b755f" Oct 29 04:54:37.830420 kubelet[2172]: E1029 04:54:37.826681 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4pd8r" podUID="4d534185-a9e7-4c27-807c-917c6d4b755f" Oct 29 04:54:37.840253 kubelet[2172]: E1029 04:54:37.840135 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8647577b76-v9phq" podUID="419b3e19-fef1-48f6-b46c-276ff1e0b621" Oct 29 04:54:37.874523 env[1306]: 2025-10-29 04:54:37.712 [INFO][3728] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" Oct 29 04:54:37.874523 env[1306]: 2025-10-29 04:54:37.712 [INFO][3728] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" iface="eth0" netns="/var/run/netns/cni-403b5575-6738-c924-48f1-e6150bbdd2cd" Oct 29 04:54:37.874523 env[1306]: 2025-10-29 04:54:37.713 [INFO][3728] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" iface="eth0" netns="/var/run/netns/cni-403b5575-6738-c924-48f1-e6150bbdd2cd" Oct 29 04:54:37.874523 env[1306]: 2025-10-29 04:54:37.713 [INFO][3728] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" iface="eth0" netns="/var/run/netns/cni-403b5575-6738-c924-48f1-e6150bbdd2cd" Oct 29 04:54:37.874523 env[1306]: 2025-10-29 04:54:37.713 [INFO][3728] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" Oct 29 04:54:37.874523 env[1306]: 2025-10-29 04:54:37.713 [INFO][3728] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" Oct 29 04:54:37.874523 env[1306]: 2025-10-29 04:54:37.806 [INFO][3737] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" HandleID="k8s-pod-network.739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" Workload="srv--xtjva.gb1.brightbox.com-k8s-csi--node--driver--tptz2-eth0" Oct 29 04:54:37.874523 env[1306]: 2025-10-29 04:54:37.807 [INFO][3737] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 04:54:37.874523 env[1306]: 2025-10-29 04:54:37.807 [INFO][3737] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 04:54:37.874523 env[1306]: 2025-10-29 04:54:37.852 [WARNING][3737] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" HandleID="k8s-pod-network.739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" Workload="srv--xtjva.gb1.brightbox.com-k8s-csi--node--driver--tptz2-eth0" Oct 29 04:54:37.874523 env[1306]: 2025-10-29 04:54:37.853 [INFO][3737] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" HandleID="k8s-pod-network.739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" Workload="srv--xtjva.gb1.brightbox.com-k8s-csi--node--driver--tptz2-eth0" Oct 29 04:54:37.874523 env[1306]: 2025-10-29 04:54:37.857 [INFO][3737] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 04:54:37.874523 env[1306]: 2025-10-29 04:54:37.868 [INFO][3728] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" Oct 29 04:54:37.879645 systemd[1]: run-netns-cni\x2d403b5575\x2d6738\x2dc924\x2d48f1\x2de6150bbdd2cd.mount: Deactivated successfully. 
Oct 29 04:54:37.883660 env[1306]: time="2025-10-29T04:54:37.880588298Z" level=info msg="TearDown network for sandbox \"739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4\" successfully" Oct 29 04:54:37.883660 env[1306]: time="2025-10-29T04:54:37.880651102Z" level=info msg="StopPodSandbox for \"739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4\" returns successfully" Oct 29 04:54:37.892184 env[1306]: time="2025-10-29T04:54:37.891997215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tptz2,Uid:de4b152a-29bb-4b0c-a12c-2eda92dd0564,Namespace:calico-system,Attempt:1,}" Oct 29 04:54:37.985800 systemd-networkd[1070]: caliea94b775d2a: Gained IPv6LL Oct 29 04:54:38.084519 env[1306]: 2025-10-29 04:54:37.749 [INFO][3719] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" Oct 29 04:54:38.084519 env[1306]: 2025-10-29 04:54:37.749 [INFO][3719] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" iface="eth0" netns="/var/run/netns/cni-6ae19b85-644d-27c5-0df5-49405dd863dc" Oct 29 04:54:38.084519 env[1306]: 2025-10-29 04:54:37.750 [INFO][3719] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" iface="eth0" netns="/var/run/netns/cni-6ae19b85-644d-27c5-0df5-49405dd863dc" Oct 29 04:54:38.084519 env[1306]: 2025-10-29 04:54:37.750 [INFO][3719] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" iface="eth0" netns="/var/run/netns/cni-6ae19b85-644d-27c5-0df5-49405dd863dc" Oct 29 04:54:38.084519 env[1306]: 2025-10-29 04:54:37.750 [INFO][3719] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" Oct 29 04:54:38.084519 env[1306]: 2025-10-29 04:54:37.750 [INFO][3719] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" Oct 29 04:54:38.084519 env[1306]: 2025-10-29 04:54:37.983 [INFO][3746] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" HandleID="k8s-pod-network.e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--m8bcr-eth0" Oct 29 04:54:38.084519 env[1306]: 2025-10-29 04:54:37.984 [INFO][3746] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 04:54:38.084519 env[1306]: 2025-10-29 04:54:37.984 [INFO][3746] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 04:54:38.084519 env[1306]: 2025-10-29 04:54:38.062 [WARNING][3746] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" HandleID="k8s-pod-network.e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--m8bcr-eth0" Oct 29 04:54:38.084519 env[1306]: 2025-10-29 04:54:38.062 [INFO][3746] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" HandleID="k8s-pod-network.e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--m8bcr-eth0" Oct 29 04:54:38.084519 env[1306]: 2025-10-29 04:54:38.066 [INFO][3746] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 04:54:38.084519 env[1306]: 2025-10-29 04:54:38.081 [INFO][3719] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" Oct 29 04:54:38.096333 env[1306]: time="2025-10-29T04:54:38.094459740Z" level=info msg="TearDown network for sandbox \"e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e\" successfully" Oct 29 04:54:38.096333 env[1306]: time="2025-10-29T04:54:38.095942276Z" level=info msg="StopPodSandbox for \"e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e\" returns successfully" Oct 29 04:54:38.089126 systemd[1]: run-netns-cni\x2d6ae19b85\x2d644d\x2d27c5\x2d0df5\x2d49405dd863dc.mount: Deactivated successfully. 
Oct 29 04:54:38.096000 audit[3791]: NETFILTER_CFG table=filter:105 family=2 entries=20 op=nft_register_rule pid=3791 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:38.096000 audit[3791]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffe34634e0 a2=0 a3=7fffe34634cc items=0 ppid=2278 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:38.096000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:38.101000 audit[3791]: NETFILTER_CFG table=nat:106 family=2 entries=14 op=nft_register_rule pid=3791 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:38.101000 audit[3791]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffe34634e0 a2=0 a3=0 items=0 ppid=2278 pid=3791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:38.101000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:38.107739 env[1306]: time="2025-10-29T04:54:38.103887235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-678d6449b5-m8bcr,Uid:b4da2a97-feea-487c-8384-a94163380e6f,Namespace:calico-apiserver,Attempt:1,}" Oct 29 04:54:38.133000 audit[3794]: NETFILTER_CFG table=filter:107 family=2 entries=20 op=nft_register_rule pid=3794 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:38.133000 audit[3794]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdbac3bf20 a2=0 a3=7ffdbac3bf0c items=0 ppid=2278 pid=3794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:38.133000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:38.139000 audit[3794]: NETFILTER_CFG table=nat:108 family=2 entries=14 op=nft_register_rule pid=3794 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:38.139000 audit[3794]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdbac3bf20 a2=0 a3=0 items=0 ppid=2278 pid=3794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:38.139000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:38.181022 systemd[1]: run-containerd-runc-k8s.io-71d6bc5d021ddcc10d95b2af898e8a201f0dc4a4a5e05b9cde0366fccf7e5962-runc.rgApaz.mount: Deactivated successfully. 
Oct 29 04:54:38.184601 systemd-networkd[1070]: cali61b4fc9dc78: Gained IPv6LL Oct 29 04:54:38.346000 audit[3837]: AVC avc: denied { bpf } for pid=3837 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.346000 audit[3837]: AVC avc: denied { bpf } for pid=3837 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.346000 audit[3837]: AVC avc: denied { perfmon } for pid=3837 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.346000 audit[3837]: AVC avc: denied { perfmon } for pid=3837 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.346000 audit[3837]: AVC avc: denied { perfmon } for pid=3837 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.346000 audit[3837]: AVC avc: denied { perfmon } for pid=3837 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.346000 audit[3837]: AVC avc: denied { perfmon } for pid=3837 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.346000 audit[3837]: AVC avc: denied { bpf } for pid=3837 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.346000 audit[3837]: AVC avc: denied { bpf } for pid=3837 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 29 04:54:38.346000 audit: BPF prog-id=10 op=LOAD Oct 29 04:54:38.346000 audit[3837]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff4f668990 a2=98 a3=1fffffffffffffff items=0 ppid=3576 pid=3837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:38.346000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Oct 29 04:54:38.375000 audit: BPF prog-id=10 op=UNLOAD Oct 29 04:54:38.377000 audit[3837]: AVC avc: denied { bpf } for pid=3837 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.377000 audit[3837]: AVC avc: denied { bpf } for pid=3837 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.377000 audit[3837]: AVC avc: denied { perfmon } for pid=3837 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.377000 audit[3837]: AVC avc: denied { perfmon } for pid=3837 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.377000 audit[3837]: AVC avc: denied { perfmon } for pid=3837 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.377000 audit[3837]: AVC avc: denied { perfmon } for pid=3837 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.377000 audit[3837]: AVC avc: denied { perfmon } for pid=3837 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.377000 audit[3837]: AVC avc: denied { bpf } for pid=3837 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.377000 audit[3837]: AVC avc: denied { bpf } for pid=3837 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.377000 audit: BPF prog-id=11 op=LOAD Oct 29 04:54:38.377000 audit[3837]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff4f668870 a2=94 a3=3 items=0 ppid=3576 pid=3837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:38.377000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Oct 29 04:54:38.380000 audit: BPF prog-id=11 op=UNLOAD Oct 29 04:54:38.380000 audit[3837]: AVC avc: denied { bpf } for pid=3837 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.380000 audit[3837]: AVC avc: denied { bpf } for pid=3837 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.380000 audit[3837]: AVC avc: denied { perfmon } for pid=3837 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.380000 audit[3837]: AVC avc: denied { perfmon } for pid=3837 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.380000 audit[3837]: AVC avc: denied { perfmon } for pid=3837 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.380000 audit[3837]: AVC avc: denied { perfmon } for pid=3837 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.380000 audit[3837]: AVC avc: denied { perfmon } for pid=3837 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.380000 audit[3837]: AVC avc: denied { bpf } for pid=3837 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.380000 audit[3837]: AVC avc: denied { bpf } for pid=3837 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.380000 audit: BPF prog-id=12 op=LOAD Oct 29 04:54:38.380000 audit[3837]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff4f6688b0 a2=94 a3=7fff4f668a90 items=0 ppid=3576 pid=3837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:38.380000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Oct 29 04:54:38.381000 audit: BPF prog-id=12 op=UNLOAD Oct 29 04:54:38.381000 audit[3837]: AVC avc: denied { perfmon } for pid=3837 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.381000 audit[3837]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7fff4f668980 a2=50 a3=a000000085 items=0 ppid=3576 pid=3837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:38.381000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Oct 29 04:54:38.399000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.399000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.399000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.399000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 29 04:54:38.399000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.399000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.399000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.399000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.399000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.399000 audit: BPF prog-id=13 op=LOAD Oct 29 04:54:38.399000 audit[3843]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff42e4c4e0 a2=98 a3=3 items=0 ppid=3576 pid=3843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:38.399000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 04:54:38.401000 audit: BPF prog-id=13 op=UNLOAD Oct 29 04:54:38.405000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.405000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.405000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.405000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.405000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.405000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.405000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.405000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.405000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.405000 audit: BPF prog-id=14 op=LOAD Oct 29 04:54:38.405000 audit[3843]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff42e4c2d0 a2=94 a3=54428f items=0 ppid=3576 pid=3843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:38.405000 audit: 
PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 04:54:38.407000 audit: BPF prog-id=14 op=UNLOAD Oct 29 04:54:38.407000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.407000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.407000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.407000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.407000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.407000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.407000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.407000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.407000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.407000 audit: BPF prog-id=15 op=LOAD Oct 29 04:54:38.407000 audit[3843]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff42e4c300 a2=94 a3=2 items=0 ppid=3576 pid=3843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:38.407000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 04:54:38.407000 audit: BPF prog-id=15 op=UNLOAD Oct 29 04:54:38.500491 systemd-networkd[1070]: califd213a9c53e: Link UP Oct 29 04:54:38.507303 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 29 04:54:38.507432 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): califd213a9c53e: link becomes ready Oct 29 04:54:38.507623 systemd-networkd[1070]: califd213a9c53e: Gained carrier Oct 29 04:54:38.548491 env[1306]: 2025-10-29 04:54:38.154 [INFO][3762] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 29 04:54:38.548491 env[1306]: 2025-10-29 04:54:38.225 [INFO][3762] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--xtjva.gb1.brightbox.com-k8s-csi--node--driver--tptz2-eth0 csi-node-driver- calico-system de4b152a-29bb-4b0c-a12c-2eda92dd0564 967 0 2025-10-29 04:54:07 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-xtjva.gb1.brightbox.com csi-node-driver-tptz2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califd213a9c53e [] [] }} ContainerID="0c7169337d9aaa9f2d903dae1dffadd17efb5852ed60c38372409adac6aec3e2" Namespace="calico-system" 
Pod="csi-node-driver-tptz2" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-csi--node--driver--tptz2-" Oct 29 04:54:38.548491 env[1306]: 2025-10-29 04:54:38.225 [INFO][3762] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0c7169337d9aaa9f2d903dae1dffadd17efb5852ed60c38372409adac6aec3e2" Namespace="calico-system" Pod="csi-node-driver-tptz2" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-csi--node--driver--tptz2-eth0" Oct 29 04:54:38.548491 env[1306]: 2025-10-29 04:54:38.423 [INFO][3819] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0c7169337d9aaa9f2d903dae1dffadd17efb5852ed60c38372409adac6aec3e2" HandleID="k8s-pod-network.0c7169337d9aaa9f2d903dae1dffadd17efb5852ed60c38372409adac6aec3e2" Workload="srv--xtjva.gb1.brightbox.com-k8s-csi--node--driver--tptz2-eth0" Oct 29 04:54:38.548491 env[1306]: 2025-10-29 04:54:38.423 [INFO][3819] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0c7169337d9aaa9f2d903dae1dffadd17efb5852ed60c38372409adac6aec3e2" HandleID="k8s-pod-network.0c7169337d9aaa9f2d903dae1dffadd17efb5852ed60c38372409adac6aec3e2" Workload="srv--xtjva.gb1.brightbox.com-k8s-csi--node--driver--tptz2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033d420), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-xtjva.gb1.brightbox.com", "pod":"csi-node-driver-tptz2", "timestamp":"2025-10-29 04:54:38.423306931 +0000 UTC"}, Hostname:"srv-xtjva.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 04:54:38.548491 env[1306]: 2025-10-29 04:54:38.423 [INFO][3819] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 04:54:38.548491 env[1306]: 2025-10-29 04:54:38.424 [INFO][3819] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 04:54:38.548491 env[1306]: 2025-10-29 04:54:38.424 [INFO][3819] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-xtjva.gb1.brightbox.com' Oct 29 04:54:38.548491 env[1306]: 2025-10-29 04:54:38.438 [INFO][3819] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0c7169337d9aaa9f2d903dae1dffadd17efb5852ed60c38372409adac6aec3e2" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:38.548491 env[1306]: 2025-10-29 04:54:38.446 [INFO][3819] ipam/ipam.go 394: Looking up existing affinities for host host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:38.548491 env[1306]: 2025-10-29 04:54:38.453 [INFO][3819] ipam/ipam.go 511: Trying affinity for 192.168.31.128/26 host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:38.548491 env[1306]: 2025-10-29 04:54:38.463 [INFO][3819] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.128/26 host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:38.548491 env[1306]: 2025-10-29 04:54:38.466 [INFO][3819] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.128/26 host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:38.548491 env[1306]: 2025-10-29 04:54:38.466 [INFO][3819] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.31.128/26 handle="k8s-pod-network.0c7169337d9aaa9f2d903dae1dffadd17efb5852ed60c38372409adac6aec3e2" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:38.548491 env[1306]: 2025-10-29 04:54:38.472 [INFO][3819] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0c7169337d9aaa9f2d903dae1dffadd17efb5852ed60c38372409adac6aec3e2 Oct 29 04:54:38.548491 env[1306]: 2025-10-29 04:54:38.481 [INFO][3819] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.31.128/26 handle="k8s-pod-network.0c7169337d9aaa9f2d903dae1dffadd17efb5852ed60c38372409adac6aec3e2" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:38.548491 env[1306]: 2025-10-29 04:54:38.490 [INFO][3819] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.31.131/26] 
block=192.168.31.128/26 handle="k8s-pod-network.0c7169337d9aaa9f2d903dae1dffadd17efb5852ed60c38372409adac6aec3e2" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:38.548491 env[1306]: 2025-10-29 04:54:38.490 [INFO][3819] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.131/26] handle="k8s-pod-network.0c7169337d9aaa9f2d903dae1dffadd17efb5852ed60c38372409adac6aec3e2" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:38.548491 env[1306]: 2025-10-29 04:54:38.490 [INFO][3819] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 04:54:38.548491 env[1306]: 2025-10-29 04:54:38.490 [INFO][3819] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.31.131/26] IPv6=[] ContainerID="0c7169337d9aaa9f2d903dae1dffadd17efb5852ed60c38372409adac6aec3e2" HandleID="k8s-pod-network.0c7169337d9aaa9f2d903dae1dffadd17efb5852ed60c38372409adac6aec3e2" Workload="srv--xtjva.gb1.brightbox.com-k8s-csi--node--driver--tptz2-eth0" Oct 29 04:54:38.549816 env[1306]: 2025-10-29 04:54:38.497 [INFO][3762] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0c7169337d9aaa9f2d903dae1dffadd17efb5852ed60c38372409adac6aec3e2" Namespace="calico-system" Pod="csi-node-driver-tptz2" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-csi--node--driver--tptz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xtjva.gb1.brightbox.com-k8s-csi--node--driver--tptz2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"de4b152a-29bb-4b0c-a12c-2eda92dd0564", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 4, 54, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xtjva.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-tptz2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califd213a9c53e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 04:54:38.549816 env[1306]: 2025-10-29 04:54:38.497 [INFO][3762] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.131/32] ContainerID="0c7169337d9aaa9f2d903dae1dffadd17efb5852ed60c38372409adac6aec3e2" Namespace="calico-system" Pod="csi-node-driver-tptz2" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-csi--node--driver--tptz2-eth0" Oct 29 04:54:38.549816 env[1306]: 2025-10-29 04:54:38.497 [INFO][3762] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califd213a9c53e ContainerID="0c7169337d9aaa9f2d903dae1dffadd17efb5852ed60c38372409adac6aec3e2" Namespace="calico-system" Pod="csi-node-driver-tptz2" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-csi--node--driver--tptz2-eth0" Oct 29 04:54:38.549816 env[1306]: 2025-10-29 04:54:38.509 [INFO][3762] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0c7169337d9aaa9f2d903dae1dffadd17efb5852ed60c38372409adac6aec3e2" Namespace="calico-system" Pod="csi-node-driver-tptz2" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-csi--node--driver--tptz2-eth0" Oct 29 04:54:38.549816 env[1306]: 2025-10-29 04:54:38.514 [INFO][3762] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="0c7169337d9aaa9f2d903dae1dffadd17efb5852ed60c38372409adac6aec3e2" Namespace="calico-system" Pod="csi-node-driver-tptz2" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-csi--node--driver--tptz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xtjva.gb1.brightbox.com-k8s-csi--node--driver--tptz2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"de4b152a-29bb-4b0c-a12c-2eda92dd0564", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 4, 54, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xtjva.gb1.brightbox.com", ContainerID:"0c7169337d9aaa9f2d903dae1dffadd17efb5852ed60c38372409adac6aec3e2", Pod:"csi-node-driver-tptz2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califd213a9c53e", MAC:"6a:9e:f4:70:5e:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 04:54:38.549816 env[1306]: 2025-10-29 04:54:38.545 [INFO][3762] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="0c7169337d9aaa9f2d903dae1dffadd17efb5852ed60c38372409adac6aec3e2" Namespace="calico-system" Pod="csi-node-driver-tptz2" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-csi--node--driver--tptz2-eth0" Oct 29 04:54:38.612973 env[1306]: time="2025-10-29T04:54:38.612761702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 04:54:38.613261 env[1306]: time="2025-10-29T04:54:38.613205087Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 04:54:38.613464 env[1306]: time="2025-10-29T04:54:38.613408048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 04:54:38.619110 env[1306]: time="2025-10-29T04:54:38.615116179Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0c7169337d9aaa9f2d903dae1dffadd17efb5852ed60c38372409adac6aec3e2 pid=3861 runtime=io.containerd.runc.v2 Oct 29 04:54:38.653262 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calic754e2ae9ed: link becomes ready Oct 29 04:54:38.652430 systemd-networkd[1070]: calic754e2ae9ed: Link UP Oct 29 04:54:38.652712 systemd-networkd[1070]: calic754e2ae9ed: Gained carrier Oct 29 04:54:38.674366 env[1306]: 2025-10-29 04:54:38.342 [INFO][3795] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--m8bcr-eth0 calico-apiserver-678d6449b5- calico-apiserver b4da2a97-feea-487c-8384-a94163380e6f 968 0 2025-10-29 04:54:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:678d6449b5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 
srv-xtjva.gb1.brightbox.com calico-apiserver-678d6449b5-m8bcr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic754e2ae9ed [] [] }} ContainerID="3a38b68c55f22c8b2dcbb29e9e4461c7de82f82035fd47cbdac155e686f8197e" Namespace="calico-apiserver" Pod="calico-apiserver-678d6449b5-m8bcr" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--m8bcr-" Oct 29 04:54:38.674366 env[1306]: 2025-10-29 04:54:38.342 [INFO][3795] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3a38b68c55f22c8b2dcbb29e9e4461c7de82f82035fd47cbdac155e686f8197e" Namespace="calico-apiserver" Pod="calico-apiserver-678d6449b5-m8bcr" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--m8bcr-eth0" Oct 29 04:54:38.674366 env[1306]: 2025-10-29 04:54:38.555 [INFO][3839] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a38b68c55f22c8b2dcbb29e9e4461c7de82f82035fd47cbdac155e686f8197e" HandleID="k8s-pod-network.3a38b68c55f22c8b2dcbb29e9e4461c7de82f82035fd47cbdac155e686f8197e" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--m8bcr-eth0" Oct 29 04:54:38.674366 env[1306]: 2025-10-29 04:54:38.555 [INFO][3839] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3a38b68c55f22c8b2dcbb29e9e4461c7de82f82035fd47cbdac155e686f8197e" HandleID="k8s-pod-network.3a38b68c55f22c8b2dcbb29e9e4461c7de82f82035fd47cbdac155e686f8197e" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--m8bcr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5b40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-xtjva.gb1.brightbox.com", "pod":"calico-apiserver-678d6449b5-m8bcr", "timestamp":"2025-10-29 04:54:38.555358522 +0000 UTC"}, Hostname:"srv-xtjva.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 04:54:38.674366 env[1306]: 2025-10-29 04:54:38.556 [INFO][3839] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 04:54:38.674366 env[1306]: 2025-10-29 04:54:38.556 [INFO][3839] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 04:54:38.674366 env[1306]: 2025-10-29 04:54:38.556 [INFO][3839] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-xtjva.gb1.brightbox.com' Oct 29 04:54:38.674366 env[1306]: 2025-10-29 04:54:38.578 [INFO][3839] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3a38b68c55f22c8b2dcbb29e9e4461c7de82f82035fd47cbdac155e686f8197e" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:38.674366 env[1306]: 2025-10-29 04:54:38.587 [INFO][3839] ipam/ipam.go 394: Looking up existing affinities for host host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:38.674366 env[1306]: 2025-10-29 04:54:38.596 [INFO][3839] ipam/ipam.go 511: Trying affinity for 192.168.31.128/26 host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:38.674366 env[1306]: 2025-10-29 04:54:38.598 [INFO][3839] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.128/26 host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:38.674366 env[1306]: 2025-10-29 04:54:38.603 [INFO][3839] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.128/26 host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:38.674366 env[1306]: 2025-10-29 04:54:38.604 [INFO][3839] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.31.128/26 handle="k8s-pod-network.3a38b68c55f22c8b2dcbb29e9e4461c7de82f82035fd47cbdac155e686f8197e" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:38.674366 env[1306]: 2025-10-29 04:54:38.606 [INFO][3839] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3a38b68c55f22c8b2dcbb29e9e4461c7de82f82035fd47cbdac155e686f8197e Oct 29 04:54:38.674366 env[1306]: 2025-10-29 04:54:38.618 [INFO][3839] 
ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.31.128/26 handle="k8s-pod-network.3a38b68c55f22c8b2dcbb29e9e4461c7de82f82035fd47cbdac155e686f8197e" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:38.674366 env[1306]: 2025-10-29 04:54:38.629 [INFO][3839] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.31.132/26] block=192.168.31.128/26 handle="k8s-pod-network.3a38b68c55f22c8b2dcbb29e9e4461c7de82f82035fd47cbdac155e686f8197e" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:38.674366 env[1306]: 2025-10-29 04:54:38.629 [INFO][3839] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.132/26] handle="k8s-pod-network.3a38b68c55f22c8b2dcbb29e9e4461c7de82f82035fd47cbdac155e686f8197e" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:38.674366 env[1306]: 2025-10-29 04:54:38.629 [INFO][3839] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 04:54:38.674366 env[1306]: 2025-10-29 04:54:38.629 [INFO][3839] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.31.132/26] IPv6=[] ContainerID="3a38b68c55f22c8b2dcbb29e9e4461c7de82f82035fd47cbdac155e686f8197e" HandleID="k8s-pod-network.3a38b68c55f22c8b2dcbb29e9e4461c7de82f82035fd47cbdac155e686f8197e" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--m8bcr-eth0" Oct 29 04:54:38.675603 env[1306]: 2025-10-29 04:54:38.633 [INFO][3795] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3a38b68c55f22c8b2dcbb29e9e4461c7de82f82035fd47cbdac155e686f8197e" Namespace="calico-apiserver" Pod="calico-apiserver-678d6449b5-m8bcr" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--m8bcr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--m8bcr-eth0", GenerateName:"calico-apiserver-678d6449b5-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"b4da2a97-feea-487c-8384-a94163380e6f", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 4, 54, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"678d6449b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xtjva.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-678d6449b5-m8bcr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic754e2ae9ed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 04:54:38.675603 env[1306]: 2025-10-29 04:54:38.633 [INFO][3795] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.132/32] ContainerID="3a38b68c55f22c8b2dcbb29e9e4461c7de82f82035fd47cbdac155e686f8197e" Namespace="calico-apiserver" Pod="calico-apiserver-678d6449b5-m8bcr" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--m8bcr-eth0" Oct 29 04:54:38.675603 env[1306]: 2025-10-29 04:54:38.633 [INFO][3795] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic754e2ae9ed ContainerID="3a38b68c55f22c8b2dcbb29e9e4461c7de82f82035fd47cbdac155e686f8197e" Namespace="calico-apiserver" Pod="calico-apiserver-678d6449b5-m8bcr" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--m8bcr-eth0" 
Oct 29 04:54:38.675603 env[1306]: 2025-10-29 04:54:38.651 [INFO][3795] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a38b68c55f22c8b2dcbb29e9e4461c7de82f82035fd47cbdac155e686f8197e" Namespace="calico-apiserver" Pod="calico-apiserver-678d6449b5-m8bcr" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--m8bcr-eth0" Oct 29 04:54:38.675603 env[1306]: 2025-10-29 04:54:38.652 [INFO][3795] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3a38b68c55f22c8b2dcbb29e9e4461c7de82f82035fd47cbdac155e686f8197e" Namespace="calico-apiserver" Pod="calico-apiserver-678d6449b5-m8bcr" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--m8bcr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--m8bcr-eth0", GenerateName:"calico-apiserver-678d6449b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"b4da2a97-feea-487c-8384-a94163380e6f", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 4, 54, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"678d6449b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xtjva.gb1.brightbox.com", ContainerID:"3a38b68c55f22c8b2dcbb29e9e4461c7de82f82035fd47cbdac155e686f8197e", Pod:"calico-apiserver-678d6449b5-m8bcr", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic754e2ae9ed", MAC:"b6:6d:c5:89:fd:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 04:54:38.675603 env[1306]: 2025-10-29 04:54:38.669 [INFO][3795] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3a38b68c55f22c8b2dcbb29e9e4461c7de82f82035fd47cbdac155e686f8197e" Namespace="calico-apiserver" Pod="calico-apiserver-678d6449b5-m8bcr" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--m8bcr-eth0" Oct 29 04:54:38.740155 env[1306]: time="2025-10-29T04:54:38.729554533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 04:54:38.740155 env[1306]: time="2025-10-29T04:54:38.729632979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 04:54:38.740155 env[1306]: time="2025-10-29T04:54:38.729651655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 04:54:38.740155 env[1306]: time="2025-10-29T04:54:38.729897093Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a38b68c55f22c8b2dcbb29e9e4461c7de82f82035fd47cbdac155e686f8197e pid=3902 runtime=io.containerd.runc.v2 Oct 29 04:54:38.767277 env[1306]: time="2025-10-29T04:54:38.767216498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tptz2,Uid:de4b152a-29bb-4b0c-a12c-2eda92dd0564,Namespace:calico-system,Attempt:1,} returns sandbox id \"0c7169337d9aaa9f2d903dae1dffadd17efb5852ed60c38372409adac6aec3e2\"" Oct 29 04:54:38.787398 env[1306]: time="2025-10-29T04:54:38.787338302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 29 04:54:38.814000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.814000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.814000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.814000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.814000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.814000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.814000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.814000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.814000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.814000 audit: BPF prog-id=16 op=LOAD Oct 29 04:54:38.814000 audit[3843]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff42e4c1c0 a2=94 a3=1 items=0 ppid=3576 pid=3843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:38.814000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 04:54:38.815000 audit: BPF prog-id=16 op=UNLOAD Oct 29 04:54:38.815000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.815000 audit[3843]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fff42e4c290 a2=50 a3=7fff42e4c370 items=0 ppid=3576 pid=3843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:38.815000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 04:54:38.831000 audit[3843]: AVC avc: denied { bpf } 
for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.831000 audit[3843]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff42e4c1d0 a2=28 a3=0 items=0 ppid=3576 pid=3843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:38.831000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 04:54:38.831000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.831000 audit[3843]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff42e4c200 a2=28 a3=0 items=0 ppid=3576 pid=3843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:38.831000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 04:54:38.831000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.831000 audit[3843]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff42e4c110 a2=28 a3=0 items=0 ppid=3576 pid=3843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:38.831000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 04:54:38.831000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.831000 audit[3843]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff42e4c220 a2=28 a3=0 items=0 ppid=3576 pid=3843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:38.831000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 04:54:38.831000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.831000 audit[3843]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff42e4c200 a2=28 a3=0 items=0 ppid=3576 pid=3843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:38.831000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 04:54:38.831000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.831000 audit[3843]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff42e4c1f0 a2=28 a3=0 items=0 ppid=3576 pid=3843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:38.831000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 04:54:38.831000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 29 04:54:38.831000 audit[3843]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff42e4c220 a2=28 a3=0 items=0 ppid=3576 pid=3843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:38.831000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 04:54:38.831000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.831000 audit[3843]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff42e4c200 a2=28 a3=0 items=0 ppid=3576 pid=3843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:38.831000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 04:54:38.831000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.831000 audit[3843]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff42e4c220 a2=28 a3=0 items=0 ppid=3576 pid=3843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:38.831000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 04:54:38.831000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.831000 audit[3843]: SYSCALL 
arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff42e4c1f0 a2=28 a3=0 items=0 ppid=3576 pid=3843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:38.831000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 04:54:38.831000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.831000 audit[3843]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff42e4c260 a2=28 a3=0 items=0 ppid=3576 pid=3843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:38.831000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 04:54:38.831000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.831000 audit[3843]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff42e4c010 a2=50 a3=1 items=0 ppid=3576 pid=3843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:38.831000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 04:54:38.831000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.831000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.831000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.831000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.831000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.831000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.831000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.831000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.831000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.831000 audit: BPF prog-id=17 op=LOAD Oct 29 04:54:38.831000 audit[3843]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff42e4c010 a2=94 a3=5 items=0 ppid=3576 pid=3843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
Oct 29 04:54:38.831000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 04:54:38.831000 audit: BPF prog-id=17 op=UNLOAD Oct 29 04:54:38.831000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.831000 audit[3843]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff42e4c0c0 a2=50 a3=1 items=0 ppid=3576 pid=3843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:38.831000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 04:54:38.831000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.831000 audit[3843]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fff42e4c1e0 a2=4 a3=38 items=0 ppid=3576 pid=3843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:38.831000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 04:54:38.832000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.832000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.832000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.832000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.832000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.832000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.832000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.832000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.832000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.832000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.832000 audit[3843]: AVC avc: denied { confidentiality } for pid=3843 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 29 04:54:38.832000 audit[3843]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff42e4c230 a2=94 a3=6 items=0 
ppid=3576 pid=3843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:38.832000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 04:54:38.832000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.832000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.832000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.832000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.832000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.832000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.832000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.832000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 29 04:54:38.832000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.832000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.832000 audit[3843]: AVC avc: denied { confidentiality } for pid=3843 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 29 04:54:38.832000 audit[3843]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff42e4b9e0 a2=94 a3=88 items=0 ppid=3576 pid=3843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:38.832000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 04:54:38.832000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.832000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.832000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.832000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 
29 04:54:38.832000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.832000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.832000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.832000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.832000 audit[3843]: AVC avc: denied { perfmon } for pid=3843 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.832000 audit[3843]: AVC avc: denied { bpf } for pid=3843 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.832000 audit[3843]: AVC avc: denied { confidentiality } for pid=3843 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 29 04:54:38.832000 audit[3843]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff42e4b9e0 a2=94 a3=88 items=0 ppid=3576 pid=3843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:38.832000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 29 04:54:38.846294 
kubelet[2172]: E1029 04:54:38.845215 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4pd8r" podUID="4d534185-a9e7-4c27-807c-917c6d4b755f" Oct 29 04:54:38.868000 audit[3943]: AVC avc: denied { bpf } for pid=3943 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.868000 audit[3943]: AVC avc: denied { bpf } for pid=3943 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.868000 audit[3943]: AVC avc: denied { perfmon } for pid=3943 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.868000 audit[3943]: AVC avc: denied { perfmon } for pid=3943 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.868000 audit[3943]: AVC avc: denied { perfmon } for pid=3943 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.868000 audit[3943]: AVC avc: denied { perfmon } for pid=3943 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.868000 audit[3943]: AVC avc: denied { perfmon } for pid=3943 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.868000 audit[3943]: AVC avc: denied { bpf } for pid=3943 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.868000 audit[3943]: AVC avc: denied { bpf } for pid=3943 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.868000 audit: BPF prog-id=18 op=LOAD Oct 29 04:54:38.868000 audit[3943]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff21cec80 a2=98 a3=1999999999999999 items=0 ppid=3576 pid=3943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:38.868000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Oct 29 04:54:38.871000 audit: BPF prog-id=18 op=UNLOAD Oct 29 04:54:38.871000 audit[3943]: AVC avc: denied { bpf } for pid=3943 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.871000 audit[3943]: AVC avc: denied { bpf } for pid=3943 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.871000 audit[3943]: AVC avc: denied { perfmon } for pid=3943 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.871000 audit[3943]: AVC avc: denied { perfmon } 
for pid=3943 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.871000 audit[3943]: AVC avc: denied { perfmon } for pid=3943 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.871000 audit[3943]: AVC avc: denied { perfmon } for pid=3943 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.871000 audit[3943]: AVC avc: denied { perfmon } for pid=3943 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.871000 audit[3943]: AVC avc: denied { bpf } for pid=3943 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.871000 audit[3943]: AVC avc: denied { bpf } for pid=3943 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.871000 audit: BPF prog-id=19 op=LOAD Oct 29 04:54:38.871000 audit[3943]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff21ceb60 a2=94 a3=ffff items=0 ppid=3576 pid=3943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:38.871000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Oct 29 04:54:38.872000 audit: BPF prog-id=19 op=UNLOAD Oct 29 04:54:38.872000 
audit[3943]: AVC avc: denied { bpf } for pid=3943 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.872000 audit[3943]: AVC avc: denied { bpf } for pid=3943 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.872000 audit[3943]: AVC avc: denied { perfmon } for pid=3943 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.872000 audit[3943]: AVC avc: denied { perfmon } for pid=3943 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.872000 audit[3943]: AVC avc: denied { perfmon } for pid=3943 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.872000 audit[3943]: AVC avc: denied { perfmon } for pid=3943 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.872000 audit[3943]: AVC avc: denied { perfmon } for pid=3943 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.872000 audit[3943]: AVC avc: denied { bpf } for pid=3943 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.872000 audit[3943]: AVC avc: denied { bpf } for pid=3943 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:38.872000 audit: BPF prog-id=20 op=LOAD Oct 29 04:54:38.872000 audit[3943]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffff21ceba0 a2=94 a3=7ffff21ced80 items=0 ppid=3576 pid=3943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 29 04:54:38.872000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F
Oct 29 04:54:38.874000 audit: BPF prog-id=20 op=UNLOAD
Oct 29 04:54:38.879692 env[1306]: time="2025-10-29T04:54:38.879558984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-678d6449b5-m8bcr,Uid:b4da2a97-feea-487c-8384-a94163380e6f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3a38b68c55f22c8b2dcbb29e9e4461c7de82f82035fd47cbdac155e686f8197e\""
Oct 29 04:54:38.984538 systemd-networkd[1070]: vxlan.calico: Link UP
Oct 29 04:54:38.984548 systemd-networkd[1070]: vxlan.calico: Gained carrier
Oct 29 04:54:39.041000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 29 04:54:39.041000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 29 04:54:39.041000 audit[3971]: AVC avc: denied { perfmon } for pid=3971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 29 04:54:39.041000 audit[3971]: AVC avc: denied { perfmon } for pid=3971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2
permissive=0 Oct 29 04:54:39.041000 audit[3971]: AVC avc: denied { perfmon } for pid=3971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.041000 audit[3971]: AVC avc: denied { perfmon } for pid=3971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.041000 audit[3971]: AVC avc: denied { perfmon } for pid=3971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.041000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.041000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.041000 audit: BPF prog-id=21 op=LOAD Oct 29 04:54:39.041000 audit[3971]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeab972a40 a2=98 a3=20 items=0 ppid=3576 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.041000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 04:54:39.045000 audit: BPF prog-id=21 op=UNLOAD Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 
29 04:54:39.046000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { perfmon } for pid=3971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { perfmon } for pid=3971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { perfmon } for pid=3971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { perfmon } for pid=3971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { perfmon } for pid=3971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit: BPF prog-id=22 op=LOAD Oct 29 04:54:39.046000 audit[3971]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeab972850 a2=94 a3=54428f items=0 ppid=3576 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.046000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 04:54:39.046000 audit: BPF prog-id=22 op=UNLOAD Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { perfmon } for pid=3971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { perfmon } for pid=3971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { perfmon } for pid=3971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { perfmon } for pid=3971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { perfmon } for pid=3971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { bpf } for 
pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit: BPF prog-id=23 op=LOAD Oct 29 04:54:39.046000 audit[3971]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeab972880 a2=94 a3=2 items=0 ppid=3576 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.046000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 04:54:39.046000 audit: BPF prog-id=23 op=UNLOAD Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit[3971]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffeab972750 a2=28 a3=0 items=0 ppid=3576 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.046000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit[3971]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffeab972780 a2=28 a3=0 items=0 ppid=3576 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.046000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit[3971]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffeab972690 a2=28 a3=0 items=0 ppid=3576 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.046000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit[3971]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffeab9727a0 a2=28 a3=0 items=0 ppid=3576 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Oct 29 04:54:39.046000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit[3971]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffeab972780 a2=28 a3=0 items=0 ppid=3576 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.046000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit[3971]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffeab972770 a2=28 a3=0 items=0 ppid=3576 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.046000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit[3971]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffeab9727a0 a2=28 a3=0 items=0 ppid=3576 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.046000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit[3971]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffeab972780 a2=28 a3=0 items=0 ppid=3576 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.046000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit[3971]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffeab9727a0 a2=28 a3=0 items=0 ppid=3576 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.046000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit[3971]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffeab972770 a2=28 a3=0 items=0 ppid=3576 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.046000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit[3971]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffeab9727e0 a2=28 a3=0 items=0 ppid=3576 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.046000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { bpf } for 
pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { perfmon } for pid=3971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { perfmon } for pid=3971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { perfmon } for pid=3971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { perfmon } for pid=3971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { perfmon } for pid=3971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.046000 audit: BPF prog-id=24 op=LOAD Oct 29 04:54:39.046000 audit[3971]: SYSCALL arch=c000003e syscall=321 success=yes 
exit=6 a0=5 a1=7ffeab972650 a2=94 a3=0 items=0 ppid=3576 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.046000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 04:54:39.047000 audit: BPF prog-id=24 op=UNLOAD Oct 29 04:54:39.050000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.050000 audit[3971]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffeab972640 a2=50 a3=2800 items=0 ppid=3576 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.050000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 04:54:39.050000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.050000 audit[3971]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffeab972640 a2=50 a3=2800 items=0 ppid=3576 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.050000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 04:54:39.052000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.052000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.052000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.052000 audit[3971]: AVC avc: denied { perfmon } for pid=3971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.052000 audit[3971]: AVC avc: denied { perfmon } for pid=3971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.052000 audit[3971]: AVC avc: denied { perfmon } for pid=3971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.052000 audit[3971]: AVC avc: denied { perfmon } for pid=3971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.052000 audit[3971]: AVC avc: denied { perfmon } for pid=3971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.052000 audit[3971]: AVC avc: denied { bpf } for 
pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.052000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.052000 audit: BPF prog-id=25 op=LOAD Oct 29 04:54:39.052000 audit[3971]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeab971e60 a2=94 a3=2 items=0 ppid=3576 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.052000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 04:54:39.052000 audit: BPF prog-id=25 op=UNLOAD Oct 29 04:54:39.052000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.052000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.052000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.052000 audit[3971]: AVC avc: denied { perfmon } for pid=3971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.052000 audit[3971]: AVC avc: denied { perfmon } for pid=3971 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.052000 audit[3971]: AVC avc: denied { perfmon } for pid=3971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.052000 audit[3971]: AVC avc: denied { perfmon } for pid=3971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.052000 audit[3971]: AVC avc: denied { perfmon } for pid=3971 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.052000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.052000 audit[3971]: AVC avc: denied { bpf } for pid=3971 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.052000 audit: BPF prog-id=26 op=LOAD Oct 29 04:54:39.052000 audit[3971]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeab971f60 a2=94 a3=30 items=0 ppid=3576 pid=3971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.052000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 29 04:54:39.056000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.056000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.056000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.056000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.056000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.056000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.056000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.056000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.056000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.056000 audit: BPF prog-id=27 op=LOAD Oct 29 04:54:39.056000 audit[3973]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffef40a9900 a2=98 a3=0 items=0 ppid=3576 pid=3973 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.056000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 04:54:39.056000 audit: BPF prog-id=27 op=UNLOAD Oct 29 04:54:39.057000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.057000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.057000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.057000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.057000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.057000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.057000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.057000 
audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.057000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.057000 audit: BPF prog-id=28 op=LOAD Oct 29 04:54:39.057000 audit[3973]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffef40a96f0 a2=94 a3=54428f items=0 ppid=3576 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.057000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 04:54:39.057000 audit: BPF prog-id=28 op=UNLOAD Oct 29 04:54:39.057000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.057000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.057000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.057000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.057000 audit[3973]: AVC avc: denied { perfmon } for 
pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.057000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.057000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.057000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.057000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.057000 audit: BPF prog-id=29 op=LOAD Oct 29 04:54:39.057000 audit[3973]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffef40a9720 a2=94 a3=2 items=0 ppid=3576 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.057000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 04:54:39.057000 audit: BPF prog-id=29 op=UNLOAD Oct 29 04:54:39.127623 env[1306]: time="2025-10-29T04:54:39.126553285Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 04:54:39.130426 env[1306]: time="2025-10-29T04:54:39.128625580Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 29 04:54:39.131224 kubelet[2172]: E1029 04:54:39.131134 2172 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 04:54:39.131331 kubelet[2172]: E1029 04:54:39.131272 2172 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 04:54:39.132200 kubelet[2172]: E1029 04:54:39.132132 2172 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kdjn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tptz2_calico-system(de4b152a-29bb-4b0c-a12c-2eda92dd0564): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 29 04:54:39.140893 env[1306]: time="2025-10-29T04:54:39.140618612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 04:54:39.307000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.307000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.307000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.307000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.307000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.307000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.307000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.307000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.307000 audit[3973]: AVC avc: denied { bpf } for 
pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.307000 audit: BPF prog-id=30 op=LOAD Oct 29 04:54:39.307000 audit[3973]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffef40a95e0 a2=94 a3=1 items=0 ppid=3576 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.307000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 04:54:39.307000 audit: BPF prog-id=30 op=UNLOAD Oct 29 04:54:39.307000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.307000 audit[3973]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffef40a96b0 a2=50 a3=7ffef40a9790 items=0 ppid=3576 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.307000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 04:54:39.321000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.321000 audit[3973]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffef40a95f0 a2=28 a3=0 items=0 ppid=3576 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.321000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 04:54:39.321000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.321000 audit[3973]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffef40a9620 a2=28 a3=0 items=0 ppid=3576 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.321000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 04:54:39.321000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.321000 audit[3973]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffef40a9530 a2=28 a3=0 items=0 ppid=3576 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.321000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 04:54:39.321000 audit[3973]: AVC avc: denied { bpf } for pid=3973 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.321000 audit[3973]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffef40a9640 a2=28 a3=0 items=0 ppid=3576 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.321000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 04:54:39.321000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.321000 audit[3973]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffef40a9620 a2=28 a3=0 items=0 ppid=3576 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.321000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 04:54:39.321000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.321000 audit[3973]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffef40a9610 a2=28 a3=0 items=0 ppid=3576 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.321000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 04:54:39.321000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.321000 audit[3973]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffef40a9640 a2=28 a3=0 items=0 ppid=3576 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.321000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 04:54:39.321000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.321000 audit[3973]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffef40a9620 a2=28 a3=0 items=0 ppid=3576 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.321000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 04:54:39.321000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 29 04:54:39.321000 audit[3973]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffef40a9640 a2=28 a3=0 items=0 ppid=3576 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.321000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 04:54:39.321000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.321000 audit[3973]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffef40a9610 a2=28 a3=0 items=0 ppid=3576 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.321000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 04:54:39.321000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.321000 audit[3973]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffef40a9680 a2=28 a3=0 items=0 ppid=3576 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.321000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 04:54:39.322000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.322000 audit[3973]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffef40a9430 a2=50 a3=1 items=0 ppid=3576 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.322000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 04:54:39.322000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.322000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.322000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.322000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.322000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.322000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.322000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.322000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.322000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.322000 audit: BPF prog-id=31 op=LOAD Oct 29 04:54:39.322000 audit[3973]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffef40a9430 a2=94 a3=5 items=0 ppid=3576 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.322000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 04:54:39.322000 audit: BPF prog-id=31 op=UNLOAD Oct 29 04:54:39.322000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.322000 audit[3973]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffef40a94e0 a2=50 a3=1 items=0 ppid=3576 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.322000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 04:54:39.322000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.322000 audit[3973]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffef40a9600 a2=4 a3=38 items=0 ppid=3576 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.322000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 04:54:39.322000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.322000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.322000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.322000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 
04:54:39.322000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.322000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.322000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.322000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.322000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.322000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.322000 audit[3973]: AVC avc: denied { confidentiality } for pid=3973 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 29 04:54:39.322000 audit[3973]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffef40a9650 a2=94 a3=6 items=0 ppid=3576 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.322000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 04:54:39.322000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.322000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.322000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.322000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.322000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.322000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.322000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.322000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.322000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.322000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.322000 audit[3973]: AVC avc: denied { confidentiality } for pid=3973 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 29 04:54:39.322000 audit[3973]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffef40a8e00 a2=94 a3=88 items=0 ppid=3576 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.322000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 04:54:39.323000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.323000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.323000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.323000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 29 04:54:39.323000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.323000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.323000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.323000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.323000 audit[3973]: AVC avc: denied { perfmon } for pid=3973 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.323000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.323000 audit[3973]: AVC avc: denied { confidentiality } for pid=3973 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 29 04:54:39.323000 audit[3973]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffef40a8e00 a2=94 a3=88 items=0 ppid=3576 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.323000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 04:54:39.325000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.325000 audit[3973]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffef40aa830 a2=10 a3=f8f00800 items=0 ppid=3576 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.325000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 04:54:39.325000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.325000 audit[3973]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffef40aa6d0 a2=10 a3=3 items=0 ppid=3576 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.325000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 04:54:39.325000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.325000 audit[3973]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffef40aa670 a2=10 a3=3 items=0 ppid=3576 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.325000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 04:54:39.328000 audit[3973]: AVC avc: denied { bpf } for pid=3973 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 29 04:54:39.328000 audit[3973]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffef40aa670 a2=10 a3=7 items=0 ppid=3576 pid=3973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.328000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 29 04:54:39.341000 audit: BPF prog-id=26 op=UNLOAD Oct 29 04:54:39.500412 env[1306]: time="2025-10-29T04:54:39.500297470Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 04:54:39.501640 env[1306]: time="2025-10-29T04:54:39.501563275Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 04:54:39.502070 kubelet[2172]: E1029 04:54:39.501999 2172 log.go:32] "PullImage from image service 
failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 04:54:39.502192 kubelet[2172]: E1029 04:54:39.502080 2172 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 04:54:39.502856 env[1306]: time="2025-10-29T04:54:39.502774673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 29 04:54:39.504147 kubelet[2172]: E1029 04:54:39.503781 2172 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nm8fq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-678d6449b5-m8bcr_calico-apiserver(b4da2a97-feea-487c-8384-a94163380e6f): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 04:54:39.505439 kubelet[2172]: E1029 04:54:39.505304 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-678d6449b5-m8bcr" podUID="b4da2a97-feea-487c-8384-a94163380e6f" Oct 29 04:54:39.522000 audit[4013]: NETFILTER_CFG table=mangle:109 family=2 entries=16 op=nft_register_chain pid=4013 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 29 04:54:39.522000 audit[4013]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffdffbe9b70 a2=0 a3=7ffdffbe9b5c items=0 ppid=3576 pid=4013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.522000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 29 04:54:39.536000 audit[4012]: NETFILTER_CFG table=raw:110 family=2 entries=21 op=nft_register_chain pid=4012 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 29 04:54:39.536000 audit[4012]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd0b61cec0 a2=0 a3=7ffd0b61ceac items=0 ppid=3576 pid=4012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.536000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 29 04:54:39.539000 audit[4015]: NETFILTER_CFG table=nat:111 family=2 entries=15 op=nft_register_chain pid=4015 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 29 04:54:39.539000 audit[4015]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffefbcb2810 a2=0 a3=7ffefbcb27fc items=0 ppid=3576 pid=4015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.539000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 29 04:54:39.559000 audit[4017]: NETFILTER_CFG table=filter:112 family=2 entries=200 op=nft_register_chain pid=4017 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 29 04:54:39.559000 audit[4017]: SYSCALL arch=c000003e syscall=46 success=yes exit=117380 a0=3 a1=7ffc4d246050 a2=0 a3=7ffc4d24603c items=0 ppid=3576 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.559000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 29 04:54:39.713660 systemd-networkd[1070]: calic754e2ae9ed: Gained IPv6LL Oct 29 04:54:39.837728 env[1306]: time="2025-10-29T04:54:39.837437299Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 04:54:39.840385 env[1306]: 
time="2025-10-29T04:54:39.840266137Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 29 04:54:39.841066 kubelet[2172]: E1029 04:54:39.840968 2172 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 04:54:39.841278 kubelet[2172]: E1029 04:54:39.841225 2172 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 04:54:39.841828 kubelet[2172]: E1029 04:54:39.841725 2172 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kdjn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tptz2_calico-system(de4b152a-29bb-4b0c-a12c-2eda92dd0564): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 29 04:54:39.843709 kubelet[2172]: E1029 04:54:39.843657 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tptz2" podUID="de4b152a-29bb-4b0c-a12c-2eda92dd0564" Oct 29 04:54:39.852455 kubelet[2172]: E1029 04:54:39.851587 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-678d6449b5-m8bcr" podUID="b4da2a97-feea-487c-8384-a94163380e6f" Oct 29 04:54:39.935000 audit[4031]: NETFILTER_CFG table=filter:113 family=2 entries=20 op=nft_register_rule pid=4031 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:39.935000 audit[4031]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdf05470d0 a2=0 a3=7ffdf05470bc items=0 ppid=2278 
pid=4031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.935000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:39.939000 audit[4031]: NETFILTER_CFG table=nat:114 family=2 entries=14 op=nft_register_rule pid=4031 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:39.939000 audit[4031]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdf05470d0 a2=0 a3=0 items=0 ppid=2278 pid=4031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:39.939000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:39.969691 systemd-networkd[1070]: califd213a9c53e: Gained IPv6LL Oct 29 04:54:40.353650 systemd-networkd[1070]: vxlan.calico: Gained IPv6LL Oct 29 04:54:40.857346 kubelet[2172]: E1029 04:54:40.855847 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tptz2" podUID="de4b152a-29bb-4b0c-a12c-2eda92dd0564" Oct 29 04:54:40.857346 kubelet[2172]: E1029 04:54:40.855982 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-678d6449b5-m8bcr" podUID="b4da2a97-feea-487c-8384-a94163380e6f" Oct 29 04:54:45.466412 env[1306]: time="2025-10-29T04:54:45.466288813Z" level=info msg="StopPodSandbox for \"e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e\"" Oct 29 04:54:45.488107 env[1306]: time="2025-10-29T04:54:45.488035761Z" level=info msg="StopPodSandbox for \"7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1\"" Oct 29 04:54:45.677987 env[1306]: 2025-10-29 04:54:45.556 [WARNING][4050] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--m8bcr-eth0", GenerateName:"calico-apiserver-678d6449b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"b4da2a97-feea-487c-8384-a94163380e6f", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 4, 54, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"678d6449b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xtjva.gb1.brightbox.com", ContainerID:"3a38b68c55f22c8b2dcbb29e9e4461c7de82f82035fd47cbdac155e686f8197e", Pod:"calico-apiserver-678d6449b5-m8bcr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic754e2ae9ed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 04:54:45.677987 env[1306]: 2025-10-29 04:54:45.557 [INFO][4050] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" Oct 29 04:54:45.677987 env[1306]: 2025-10-29 04:54:45.557 [INFO][4050] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" iface="eth0" netns="" Oct 29 04:54:45.677987 env[1306]: 2025-10-29 04:54:45.557 [INFO][4050] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" Oct 29 04:54:45.677987 env[1306]: 2025-10-29 04:54:45.557 [INFO][4050] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" Oct 29 04:54:45.677987 env[1306]: 2025-10-29 04:54:45.659 [INFO][4073] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" HandleID="k8s-pod-network.e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--m8bcr-eth0" Oct 29 04:54:45.677987 env[1306]: 2025-10-29 04:54:45.659 [INFO][4073] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 04:54:45.677987 env[1306]: 2025-10-29 04:54:45.659 [INFO][4073] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 04:54:45.677987 env[1306]: 2025-10-29 04:54:45.672 [WARNING][4073] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" HandleID="k8s-pod-network.e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--m8bcr-eth0" Oct 29 04:54:45.677987 env[1306]: 2025-10-29 04:54:45.672 [INFO][4073] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" HandleID="k8s-pod-network.e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--m8bcr-eth0" Oct 29 04:54:45.677987 env[1306]: 2025-10-29 04:54:45.674 [INFO][4073] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 04:54:45.677987 env[1306]: 2025-10-29 04:54:45.676 [INFO][4050] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" Oct 29 04:54:45.679266 env[1306]: time="2025-10-29T04:54:45.678033125Z" level=info msg="TearDown network for sandbox \"e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e\" successfully" Oct 29 04:54:45.679266 env[1306]: time="2025-10-29T04:54:45.678084141Z" level=info msg="StopPodSandbox for \"e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e\" returns successfully" Oct 29 04:54:45.679935 env[1306]: time="2025-10-29T04:54:45.679886570Z" level=info msg="RemovePodSandbox for \"e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e\"" Oct 29 04:54:45.680281 env[1306]: time="2025-10-29T04:54:45.680181523Z" level=info msg="Forcibly stopping sandbox \"e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e\"" Oct 29 04:54:45.706915 env[1306]: 2025-10-29 04:54:45.606 [INFO][4066] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" Oct 29 04:54:45.706915 env[1306]: 2025-10-29 04:54:45.606 
[INFO][4066] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" iface="eth0" netns="/var/run/netns/cni-1eea6494-d025-0e57-b444-e35a0c0e4d1b" Oct 29 04:54:45.706915 env[1306]: 2025-10-29 04:54:45.607 [INFO][4066] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" iface="eth0" netns="/var/run/netns/cni-1eea6494-d025-0e57-b444-e35a0c0e4d1b" Oct 29 04:54:45.706915 env[1306]: 2025-10-29 04:54:45.608 [INFO][4066] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" iface="eth0" netns="/var/run/netns/cni-1eea6494-d025-0e57-b444-e35a0c0e4d1b" Oct 29 04:54:45.706915 env[1306]: 2025-10-29 04:54:45.612 [INFO][4066] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" Oct 29 04:54:45.706915 env[1306]: 2025-10-29 04:54:45.612 [INFO][4066] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" Oct 29 04:54:45.706915 env[1306]: 2025-10-29 04:54:45.689 [INFO][4080] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" HandleID="k8s-pod-network.7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" Workload="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--s5c5l-eth0" Oct 29 04:54:45.706915 env[1306]: 2025-10-29 04:54:45.690 [INFO][4080] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 04:54:45.706915 env[1306]: 2025-10-29 04:54:45.690 [INFO][4080] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 04:54:45.706915 env[1306]: 2025-10-29 04:54:45.700 [WARNING][4080] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" HandleID="k8s-pod-network.7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" Workload="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--s5c5l-eth0" Oct 29 04:54:45.706915 env[1306]: 2025-10-29 04:54:45.700 [INFO][4080] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" HandleID="k8s-pod-network.7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" Workload="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--s5c5l-eth0" Oct 29 04:54:45.706915 env[1306]: 2025-10-29 04:54:45.702 [INFO][4080] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 04:54:45.706915 env[1306]: 2025-10-29 04:54:45.704 [INFO][4066] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" Oct 29 04:54:45.712506 systemd[1]: run-netns-cni\x2d1eea6494\x2dd025\x2d0e57\x2db444\x2de35a0c0e4d1b.mount: Deactivated successfully. 
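The audit `PROCTITLE` records that recur throughout this log (the long `proctitle=627066…` values) hex-encode the offending process's command line, with NUL bytes separating the arguments. A minimal decoding sketch using only the Python standard library (the helper name is illustrative, not part of any audit tooling):

```python
def decode_proctitle(hex_value: str) -> list[str]:
    """Decode an audit PROCTITLE hex string into its argv list.

    auditd captures the command line as raw bytes with NUL
    separators between arguments, then hex-encodes the whole
    buffer for the log record.
    """
    raw = bytes.fromhex(hex_value)
    return raw.decode("utf-8", errors="replace").split("\x00")

# The bpftool proctitle attached to the AVC denials above:
argv = decode_proctitle(
    "627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F67"
    "0073686F770070696E6E6564002F7379732F66732F6270662F63616C69"
    "636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41"
)
# → ['bpftool', '--json', '--pretty', 'prog', 'show', 'pinned',
#    '/sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A']
```

The same decoding applied to the `iptables-nft-re` records yields `iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000`, which is Calico's Felix dataplane programming the nft-backed iptables chains seen in the `NETFILTER_CFG` events.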
Oct 29 04:54:45.714081 env[1306]: time="2025-10-29T04:54:45.714027352Z" level=info msg="TearDown network for sandbox \"7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1\" successfully" Oct 29 04:54:45.714239 env[1306]: time="2025-10-29T04:54:45.714203986Z" level=info msg="StopPodSandbox for \"7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1\" returns successfully" Oct 29 04:54:45.718033 env[1306]: time="2025-10-29T04:54:45.716784236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s5c5l,Uid:14f0fbf4-8cf4-45f3-bb19-eb68dd03b78e,Namespace:kube-system,Attempt:1,}" Oct 29 04:54:45.870985 env[1306]: 2025-10-29 04:54:45.796 [WARNING][4097] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--m8bcr-eth0", GenerateName:"calico-apiserver-678d6449b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"b4da2a97-feea-487c-8384-a94163380e6f", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 4, 54, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"678d6449b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xtjva.gb1.brightbox.com", 
ContainerID:"3a38b68c55f22c8b2dcbb29e9e4461c7de82f82035fd47cbdac155e686f8197e", Pod:"calico-apiserver-678d6449b5-m8bcr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic754e2ae9ed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 04:54:45.870985 env[1306]: 2025-10-29 04:54:45.797 [INFO][4097] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" Oct 29 04:54:45.870985 env[1306]: 2025-10-29 04:54:45.797 [INFO][4097] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" iface="eth0" netns="" Oct 29 04:54:45.870985 env[1306]: 2025-10-29 04:54:45.797 [INFO][4097] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" Oct 29 04:54:45.870985 env[1306]: 2025-10-29 04:54:45.797 [INFO][4097] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" Oct 29 04:54:45.870985 env[1306]: 2025-10-29 04:54:45.850 [INFO][4116] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" HandleID="k8s-pod-network.e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--m8bcr-eth0" Oct 29 04:54:45.870985 env[1306]: 2025-10-29 04:54:45.851 [INFO][4116] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 29 04:54:45.870985 env[1306]: 2025-10-29 04:54:45.851 [INFO][4116] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 04:54:45.870985 env[1306]: 2025-10-29 04:54:45.863 [WARNING][4116] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" HandleID="k8s-pod-network.e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--m8bcr-eth0" Oct 29 04:54:45.870985 env[1306]: 2025-10-29 04:54:45.863 [INFO][4116] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" HandleID="k8s-pod-network.e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--m8bcr-eth0" Oct 29 04:54:45.870985 env[1306]: 2025-10-29 04:54:45.866 [INFO][4116] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 04:54:45.870985 env[1306]: 2025-10-29 04:54:45.868 [INFO][4097] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e" Oct 29 04:54:45.872005 env[1306]: time="2025-10-29T04:54:45.870991292Z" level=info msg="TearDown network for sandbox \"e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e\" successfully" Oct 29 04:54:45.874401 env[1306]: time="2025-10-29T04:54:45.874347440Z" level=info msg="RemovePodSandbox \"e11c7668488fb15cd0acf45b3544935ccfe9dcdf01d48315b99d897214d0891e\" returns successfully" Oct 29 04:54:45.875315 env[1306]: time="2025-10-29T04:54:45.875266392Z" level=info msg="StopPodSandbox for \"739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4\"" Oct 29 04:54:45.950549 systemd-networkd[1070]: calie75e69b7c7f: Link UP Oct 29 04:54:45.959627 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 29 04:54:45.959798 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie75e69b7c7f: link becomes ready Oct 29 04:54:45.961522 systemd-networkd[1070]: calie75e69b7c7f: Gained carrier Oct 29 04:54:46.002242 env[1306]: 2025-10-29 04:54:45.814 [INFO][4101] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--s5c5l-eth0 coredns-668d6bf9bc- kube-system 14f0fbf4-8cf4-45f3-bb19-eb68dd03b78e 1039 0 2025-10-29 04:53:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-xtjva.gb1.brightbox.com coredns-668d6bf9bc-s5c5l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie75e69b7c7f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="63f762bb86792b3b64fe17ebed94bdd32c48cf715405c78e118344c2abe8365e" Namespace="kube-system" Pod="coredns-668d6bf9bc-s5c5l" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--s5c5l-" Oct 29 04:54:46.002242 env[1306]: 2025-10-29 04:54:45.814 
[INFO][4101] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="63f762bb86792b3b64fe17ebed94bdd32c48cf715405c78e118344c2abe8365e" Namespace="kube-system" Pod="coredns-668d6bf9bc-s5c5l" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--s5c5l-eth0" Oct 29 04:54:46.002242 env[1306]: 2025-10-29 04:54:45.887 [INFO][4122] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="63f762bb86792b3b64fe17ebed94bdd32c48cf715405c78e118344c2abe8365e" HandleID="k8s-pod-network.63f762bb86792b3b64fe17ebed94bdd32c48cf715405c78e118344c2abe8365e" Workload="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--s5c5l-eth0" Oct 29 04:54:46.002242 env[1306]: 2025-10-29 04:54:45.888 [INFO][4122] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="63f762bb86792b3b64fe17ebed94bdd32c48cf715405c78e118344c2abe8365e" HandleID="k8s-pod-network.63f762bb86792b3b64fe17ebed94bdd32c48cf715405c78e118344c2abe8365e" Workload="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--s5c5l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b73a0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-xtjva.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-s5c5l", "timestamp":"2025-10-29 04:54:45.887777259 +0000 UTC"}, Hostname:"srv-xtjva.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 04:54:46.002242 env[1306]: 2025-10-29 04:54:45.888 [INFO][4122] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 04:54:46.002242 env[1306]: 2025-10-29 04:54:45.888 [INFO][4122] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 04:54:46.002242 env[1306]: 2025-10-29 04:54:45.888 [INFO][4122] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-xtjva.gb1.brightbox.com' Oct 29 04:54:46.002242 env[1306]: 2025-10-29 04:54:45.898 [INFO][4122] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.63f762bb86792b3b64fe17ebed94bdd32c48cf715405c78e118344c2abe8365e" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:46.002242 env[1306]: 2025-10-29 04:54:45.905 [INFO][4122] ipam/ipam.go 394: Looking up existing affinities for host host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:46.002242 env[1306]: 2025-10-29 04:54:45.912 [INFO][4122] ipam/ipam.go 511: Trying affinity for 192.168.31.128/26 host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:46.002242 env[1306]: 2025-10-29 04:54:45.917 [INFO][4122] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.128/26 host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:46.002242 env[1306]: 2025-10-29 04:54:45.920 [INFO][4122] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.128/26 host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:46.002242 env[1306]: 2025-10-29 04:54:45.920 [INFO][4122] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.31.128/26 handle="k8s-pod-network.63f762bb86792b3b64fe17ebed94bdd32c48cf715405c78e118344c2abe8365e" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:46.002242 env[1306]: 2025-10-29 04:54:45.924 [INFO][4122] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.63f762bb86792b3b64fe17ebed94bdd32c48cf715405c78e118344c2abe8365e Oct 29 04:54:46.002242 env[1306]: 2025-10-29 04:54:45.930 [INFO][4122] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.31.128/26 handle="k8s-pod-network.63f762bb86792b3b64fe17ebed94bdd32c48cf715405c78e118344c2abe8365e" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:46.002242 env[1306]: 2025-10-29 04:54:45.941 [INFO][4122] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.31.133/26] 
block=192.168.31.128/26 handle="k8s-pod-network.63f762bb86792b3b64fe17ebed94bdd32c48cf715405c78e118344c2abe8365e" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:46.002242 env[1306]: 2025-10-29 04:54:45.942 [INFO][4122] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.133/26] handle="k8s-pod-network.63f762bb86792b3b64fe17ebed94bdd32c48cf715405c78e118344c2abe8365e" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:46.002242 env[1306]: 2025-10-29 04:54:45.942 [INFO][4122] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 04:54:46.002242 env[1306]: 2025-10-29 04:54:45.942 [INFO][4122] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.31.133/26] IPv6=[] ContainerID="63f762bb86792b3b64fe17ebed94bdd32c48cf715405c78e118344c2abe8365e" HandleID="k8s-pod-network.63f762bb86792b3b64fe17ebed94bdd32c48cf715405c78e118344c2abe8365e" Workload="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--s5c5l-eth0" Oct 29 04:54:46.003759 env[1306]: 2025-10-29 04:54:45.945 [INFO][4101] cni-plugin/k8s.go 418: Populated endpoint ContainerID="63f762bb86792b3b64fe17ebed94bdd32c48cf715405c78e118344c2abe8365e" Namespace="kube-system" Pod="coredns-668d6bf9bc-s5c5l" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--s5c5l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--s5c5l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"14f0fbf4-8cf4-45f3-bb19-eb68dd03b78e", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 4, 53, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xtjva.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-s5c5l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie75e69b7c7f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 04:54:46.003759 env[1306]: 2025-10-29 04:54:45.945 [INFO][4101] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.133/32] ContainerID="63f762bb86792b3b64fe17ebed94bdd32c48cf715405c78e118344c2abe8365e" Namespace="kube-system" Pod="coredns-668d6bf9bc-s5c5l" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--s5c5l-eth0" Oct 29 04:54:46.003759 env[1306]: 2025-10-29 04:54:45.945 [INFO][4101] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie75e69b7c7f ContainerID="63f762bb86792b3b64fe17ebed94bdd32c48cf715405c78e118344c2abe8365e" Namespace="kube-system" Pod="coredns-668d6bf9bc-s5c5l" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--s5c5l-eth0" Oct 29 04:54:46.003759 env[1306]: 2025-10-29 04:54:45.963 [INFO][4101] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="63f762bb86792b3b64fe17ebed94bdd32c48cf715405c78e118344c2abe8365e" Namespace="kube-system" Pod="coredns-668d6bf9bc-s5c5l" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--s5c5l-eth0" Oct 29 04:54:46.003759 env[1306]: 2025-10-29 04:54:45.964 [INFO][4101] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="63f762bb86792b3b64fe17ebed94bdd32c48cf715405c78e118344c2abe8365e" Namespace="kube-system" Pod="coredns-668d6bf9bc-s5c5l" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--s5c5l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--s5c5l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"14f0fbf4-8cf4-45f3-bb19-eb68dd03b78e", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 4, 53, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xtjva.gb1.brightbox.com", ContainerID:"63f762bb86792b3b64fe17ebed94bdd32c48cf715405c78e118344c2abe8365e", Pod:"coredns-668d6bf9bc-s5c5l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie75e69b7c7f", MAC:"4e:7a:29:44:f6:4b", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 04:54:46.003759 env[1306]: 2025-10-29 04:54:45.984 [INFO][4101] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="63f762bb86792b3b64fe17ebed94bdd32c48cf715405c78e118344c2abe8365e" Namespace="kube-system" Pod="coredns-668d6bf9bc-s5c5l" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--s5c5l-eth0" Oct 29 04:54:46.081643 env[1306]: time="2025-10-29T04:54:46.081481880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 04:54:46.081864 env[1306]: time="2025-10-29T04:54:46.081672756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 04:54:46.081864 env[1306]: time="2025-10-29T04:54:46.081746216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 04:54:46.082669 env[1306]: time="2025-10-29T04:54:46.082527062Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/63f762bb86792b3b64fe17ebed94bdd32c48cf715405c78e118344c2abe8365e pid=4164 runtime=io.containerd.runc.v2 Oct 29 04:54:46.121000 audit[4185]: NETFILTER_CFG table=filter:115 family=2 entries=60 op=nft_register_chain pid=4185 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 29 04:54:46.123864 kernel: kauditd_printk_skb: 565 callbacks suppressed Oct 29 04:54:46.124000 kernel: audit: type=1325 audit(1761713686.121:428): table=filter:115 family=2 entries=60 op=nft_register_chain pid=4185 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 29 04:54:46.121000 audit[4185]: SYSCALL arch=c000003e syscall=46 success=yes exit=28968 a0=3 a1=7fff945ba350 a2=0 a3=7fff945ba33c items=0 ppid=3576 pid=4185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:46.134982 kernel: audit: type=1300 audit(1761713686.121:428): arch=c000003e syscall=46 success=yes exit=28968 a0=3 a1=7fff945ba350 a2=0 a3=7fff945ba33c items=0 ppid=3576 pid=4185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:46.145924 kernel: audit: type=1327 audit(1761713686.121:428): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 29 04:54:46.121000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 29 
04:54:46.199821 env[1306]: 2025-10-29 04:54:46.070 [WARNING][4138] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xtjva.gb1.brightbox.com-k8s-csi--node--driver--tptz2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"de4b152a-29bb-4b0c-a12c-2eda92dd0564", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 4, 54, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xtjva.gb1.brightbox.com", ContainerID:"0c7169337d9aaa9f2d903dae1dffadd17efb5852ed60c38372409adac6aec3e2", Pod:"csi-node-driver-tptz2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califd213a9c53e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 04:54:46.199821 env[1306]: 2025-10-29 04:54:46.071 [INFO][4138] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" Oct 29 04:54:46.199821 env[1306]: 2025-10-29 04:54:46.071 [INFO][4138] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" iface="eth0" netns="" Oct 29 04:54:46.199821 env[1306]: 2025-10-29 04:54:46.071 [INFO][4138] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" Oct 29 04:54:46.199821 env[1306]: 2025-10-29 04:54:46.071 [INFO][4138] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" Oct 29 04:54:46.199821 env[1306]: 2025-10-29 04:54:46.160 [INFO][4168] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" HandleID="k8s-pod-network.739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" Workload="srv--xtjva.gb1.brightbox.com-k8s-csi--node--driver--tptz2-eth0" Oct 29 04:54:46.199821 env[1306]: 2025-10-29 04:54:46.161 [INFO][4168] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 04:54:46.199821 env[1306]: 2025-10-29 04:54:46.161 [INFO][4168] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 04:54:46.199821 env[1306]: 2025-10-29 04:54:46.192 [WARNING][4168] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" HandleID="k8s-pod-network.739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" Workload="srv--xtjva.gb1.brightbox.com-k8s-csi--node--driver--tptz2-eth0" Oct 29 04:54:46.199821 env[1306]: 2025-10-29 04:54:46.192 [INFO][4168] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" HandleID="k8s-pod-network.739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" Workload="srv--xtjva.gb1.brightbox.com-k8s-csi--node--driver--tptz2-eth0" Oct 29 04:54:46.199821 env[1306]: 2025-10-29 04:54:46.195 [INFO][4168] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 04:54:46.199821 env[1306]: 2025-10-29 04:54:46.197 [INFO][4138] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" Oct 29 04:54:46.200983 env[1306]: time="2025-10-29T04:54:46.200932281Z" level=info msg="TearDown network for sandbox \"739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4\" successfully" Oct 29 04:54:46.201210 env[1306]: time="2025-10-29T04:54:46.201149187Z" level=info msg="StopPodSandbox for \"739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4\" returns successfully" Oct 29 04:54:46.202444 env[1306]: time="2025-10-29T04:54:46.202336531Z" level=info msg="RemovePodSandbox for \"739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4\"" Oct 29 04:54:46.202641 env[1306]: time="2025-10-29T04:54:46.202576933Z" level=info msg="Forcibly stopping sandbox \"739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4\"" Oct 29 04:54:46.272550 env[1306]: time="2025-10-29T04:54:46.272253543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s5c5l,Uid:14f0fbf4-8cf4-45f3-bb19-eb68dd03b78e,Namespace:kube-system,Attempt:1,} returns sandbox id 
\"63f762bb86792b3b64fe17ebed94bdd32c48cf715405c78e118344c2abe8365e\"" Oct 29 04:54:46.287946 env[1306]: time="2025-10-29T04:54:46.287881014Z" level=info msg="CreateContainer within sandbox \"63f762bb86792b3b64fe17ebed94bdd32c48cf715405c78e118344c2abe8365e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 29 04:54:46.307526 env[1306]: time="2025-10-29T04:54:46.307409106Z" level=info msg="CreateContainer within sandbox \"63f762bb86792b3b64fe17ebed94bdd32c48cf715405c78e118344c2abe8365e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0660ce2915ffdc51d4ff47d6795ad866937e72a160131deee25b6a09b96f1120\"" Oct 29 04:54:46.308664 env[1306]: time="2025-10-29T04:54:46.308626575Z" level=info msg="StartContainer for \"0660ce2915ffdc51d4ff47d6795ad866937e72a160131deee25b6a09b96f1120\"" Oct 29 04:54:46.378131 env[1306]: 2025-10-29 04:54:46.290 [WARNING][4212] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xtjva.gb1.brightbox.com-k8s-csi--node--driver--tptz2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"de4b152a-29bb-4b0c-a12c-2eda92dd0564", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 4, 54, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xtjva.gb1.brightbox.com", ContainerID:"0c7169337d9aaa9f2d903dae1dffadd17efb5852ed60c38372409adac6aec3e2", Pod:"csi-node-driver-tptz2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.31.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califd213a9c53e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 04:54:46.378131 env[1306]: 2025-10-29 04:54:46.290 [INFO][4212] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" Oct 29 04:54:46.378131 env[1306]: 2025-10-29 04:54:46.290 [INFO][4212] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" iface="eth0" netns="" Oct 29 04:54:46.378131 env[1306]: 2025-10-29 04:54:46.290 [INFO][4212] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" Oct 29 04:54:46.378131 env[1306]: 2025-10-29 04:54:46.290 [INFO][4212] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" Oct 29 04:54:46.378131 env[1306]: 2025-10-29 04:54:46.342 [INFO][4225] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" HandleID="k8s-pod-network.739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" Workload="srv--xtjva.gb1.brightbox.com-k8s-csi--node--driver--tptz2-eth0" Oct 29 04:54:46.378131 env[1306]: 2025-10-29 04:54:46.343 [INFO][4225] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 04:54:46.378131 env[1306]: 2025-10-29 04:54:46.343 [INFO][4225] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 04:54:46.378131 env[1306]: 2025-10-29 04:54:46.364 [WARNING][4225] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" HandleID="k8s-pod-network.739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" Workload="srv--xtjva.gb1.brightbox.com-k8s-csi--node--driver--tptz2-eth0" Oct 29 04:54:46.378131 env[1306]: 2025-10-29 04:54:46.364 [INFO][4225] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" HandleID="k8s-pod-network.739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" Workload="srv--xtjva.gb1.brightbox.com-k8s-csi--node--driver--tptz2-eth0" Oct 29 04:54:46.378131 env[1306]: 2025-10-29 04:54:46.367 [INFO][4225] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 04:54:46.378131 env[1306]: 2025-10-29 04:54:46.372 [INFO][4212] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4" Oct 29 04:54:46.378131 env[1306]: time="2025-10-29T04:54:46.377716452Z" level=info msg="TearDown network for sandbox \"739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4\" successfully" Oct 29 04:54:46.381961 env[1306]: time="2025-10-29T04:54:46.381703937Z" level=info msg="RemovePodSandbox \"739eaced309940c6bcca32070013c922e13d526985b964194accbff09d5808c4\" returns successfully" Oct 29 04:54:46.384252 env[1306]: time="2025-10-29T04:54:46.382661899Z" level=info msg="StopPodSandbox for \"0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5\"" Oct 29 04:54:46.436109 env[1306]: time="2025-10-29T04:54:46.436047649Z" level=info msg="StartContainer for \"0660ce2915ffdc51d4ff47d6795ad866937e72a160131deee25b6a09b96f1120\" returns successfully" Oct 29 04:54:46.486843 env[1306]: time="2025-10-29T04:54:46.486785817Z" level=info msg="StopPodSandbox for \"eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1\"" Oct 29 04:54:46.488532 env[1306]: time="2025-10-29T04:54:46.488496073Z" level=info 
msg="StopPodSandbox for \"52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6\"" Oct 29 04:54:46.559208 env[1306]: 2025-10-29 04:54:46.466 [WARNING][4267] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-whisker--65797c84c4--zrdnx-eth0" Oct 29 04:54:46.559208 env[1306]: 2025-10-29 04:54:46.466 [INFO][4267] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" Oct 29 04:54:46.559208 env[1306]: 2025-10-29 04:54:46.466 [INFO][4267] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" iface="eth0" netns="" Oct 29 04:54:46.559208 env[1306]: 2025-10-29 04:54:46.466 [INFO][4267] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" Oct 29 04:54:46.559208 env[1306]: 2025-10-29 04:54:46.466 [INFO][4267] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" Oct 29 04:54:46.559208 env[1306]: 2025-10-29 04:54:46.530 [INFO][4283] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" HandleID="k8s-pod-network.0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" Workload="srv--xtjva.gb1.brightbox.com-k8s-whisker--65797c84c4--zrdnx-eth0" Oct 29 04:54:46.559208 env[1306]: 2025-10-29 04:54:46.530 [INFO][4283] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 04:54:46.559208 env[1306]: 2025-10-29 04:54:46.530 [INFO][4283] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 04:54:46.559208 env[1306]: 2025-10-29 04:54:46.541 [WARNING][4283] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" HandleID="k8s-pod-network.0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" Workload="srv--xtjva.gb1.brightbox.com-k8s-whisker--65797c84c4--zrdnx-eth0" Oct 29 04:54:46.559208 env[1306]: 2025-10-29 04:54:46.541 [INFO][4283] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" HandleID="k8s-pod-network.0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" Workload="srv--xtjva.gb1.brightbox.com-k8s-whisker--65797c84c4--zrdnx-eth0" Oct 29 04:54:46.559208 env[1306]: 2025-10-29 04:54:46.546 [INFO][4283] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 04:54:46.559208 env[1306]: 2025-10-29 04:54:46.551 [INFO][4267] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" Oct 29 04:54:46.559208 env[1306]: time="2025-10-29T04:54:46.557142858Z" level=info msg="TearDown network for sandbox \"0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5\" successfully" Oct 29 04:54:46.559208 env[1306]: time="2025-10-29T04:54:46.557246398Z" level=info msg="StopPodSandbox for \"0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5\" returns successfully" Oct 29 04:54:46.559208 env[1306]: time="2025-10-29T04:54:46.558260058Z" level=info msg="RemovePodSandbox for \"0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5\"" Oct 29 04:54:46.559208 env[1306]: time="2025-10-29T04:54:46.558331091Z" level=info msg="Forcibly stopping sandbox \"0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5\"" Oct 29 04:54:46.713145 systemd[1]: run-containerd-runc-k8s.io-63f762bb86792b3b64fe17ebed94bdd32c48cf715405c78e118344c2abe8365e-runc.CeVgpb.mount: Deactivated successfully. Oct 29 04:54:46.844050 env[1306]: 2025-10-29 04:54:46.727 [WARNING][4326] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-whisker--65797c84c4--zrdnx-eth0" Oct 29 04:54:46.844050 env[1306]: 2025-10-29 04:54:46.728 [INFO][4326] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" Oct 29 04:54:46.844050 env[1306]: 2025-10-29 04:54:46.728 [INFO][4326] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" iface="eth0" netns="" Oct 29 04:54:46.844050 env[1306]: 2025-10-29 04:54:46.728 [INFO][4326] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" Oct 29 04:54:46.844050 env[1306]: 2025-10-29 04:54:46.729 [INFO][4326] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" Oct 29 04:54:46.844050 env[1306]: 2025-10-29 04:54:46.815 [INFO][4341] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" HandleID="k8s-pod-network.0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" Workload="srv--xtjva.gb1.brightbox.com-k8s-whisker--65797c84c4--zrdnx-eth0" Oct 29 04:54:46.844050 env[1306]: 2025-10-29 04:54:46.815 [INFO][4341] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 04:54:46.844050 env[1306]: 2025-10-29 04:54:46.815 [INFO][4341] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 04:54:46.844050 env[1306]: 2025-10-29 04:54:46.833 [WARNING][4341] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" HandleID="k8s-pod-network.0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" Workload="srv--xtjva.gb1.brightbox.com-k8s-whisker--65797c84c4--zrdnx-eth0" Oct 29 04:54:46.844050 env[1306]: 2025-10-29 04:54:46.833 [INFO][4341] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" HandleID="k8s-pod-network.0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" Workload="srv--xtjva.gb1.brightbox.com-k8s-whisker--65797c84c4--zrdnx-eth0" Oct 29 04:54:46.844050 env[1306]: 2025-10-29 04:54:46.835 [INFO][4341] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 04:54:46.844050 env[1306]: 2025-10-29 04:54:46.841 [INFO][4326] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5" Oct 29 04:54:46.845312 env[1306]: time="2025-10-29T04:54:46.844021503Z" level=info msg="TearDown network for sandbox \"0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5\" successfully" Oct 29 04:54:46.848650 env[1306]: time="2025-10-29T04:54:46.848601380Z" level=info msg="RemovePodSandbox \"0cad14101c0f042ae470d9a0879cf62b4eb0bef0b02f71ed48495f9cc020f4f5\" returns successfully" Oct 29 04:54:46.849412 env[1306]: time="2025-10-29T04:54:46.849345333Z" level=info msg="StopPodSandbox for \"a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b\"" Oct 29 04:54:46.914590 env[1306]: 2025-10-29 04:54:46.772 [INFO][4304] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" Oct 29 04:54:46.914590 env[1306]: 2025-10-29 04:54:46.772 [INFO][4304] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" iface="eth0" netns="/var/run/netns/cni-d5ceaa0a-4ef9-89e0-67f4-b509e9e55928" Oct 29 04:54:46.914590 env[1306]: 2025-10-29 04:54:46.773 [INFO][4304] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" iface="eth0" netns="/var/run/netns/cni-d5ceaa0a-4ef9-89e0-67f4-b509e9e55928" Oct 29 04:54:46.914590 env[1306]: 2025-10-29 04:54:46.774 [INFO][4304] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" iface="eth0" netns="/var/run/netns/cni-d5ceaa0a-4ef9-89e0-67f4-b509e9e55928" Oct 29 04:54:46.914590 env[1306]: 2025-10-29 04:54:46.774 [INFO][4304] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" Oct 29 04:54:46.914590 env[1306]: 2025-10-29 04:54:46.774 [INFO][4304] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" Oct 29 04:54:46.914590 env[1306]: 2025-10-29 04:54:46.862 [INFO][4348] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" HandleID="k8s-pod-network.52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--kube--controllers--865b7496cf--28bh8-eth0" Oct 29 04:54:46.914590 env[1306]: 2025-10-29 04:54:46.862 [INFO][4348] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 04:54:46.914590 env[1306]: 2025-10-29 04:54:46.862 [INFO][4348] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 04:54:46.914590 env[1306]: 2025-10-29 04:54:46.873 [WARNING][4348] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" HandleID="k8s-pod-network.52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--kube--controllers--865b7496cf--28bh8-eth0" Oct 29 04:54:46.914590 env[1306]: 2025-10-29 04:54:46.874 [INFO][4348] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" HandleID="k8s-pod-network.52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--kube--controllers--865b7496cf--28bh8-eth0" Oct 29 04:54:46.914590 env[1306]: 2025-10-29 04:54:46.878 [INFO][4348] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 04:54:46.914590 env[1306]: 2025-10-29 04:54:46.887 [INFO][4304] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" Oct 29 04:54:46.918504 systemd[1]: run-netns-cni\x2dd5ceaa0a\x2d4ef9\x2d89e0\x2d67f4\x2db509e9e55928.mount: Deactivated successfully. 
Oct 29 04:54:46.921040 env[1306]: time="2025-10-29T04:54:46.920797977Z" level=info msg="TearDown network for sandbox \"52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6\" successfully" Oct 29 04:54:46.921040 env[1306]: time="2025-10-29T04:54:46.921019909Z" level=info msg="StopPodSandbox for \"52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6\" returns successfully" Oct 29 04:54:46.936599 kubelet[2172]: I1029 04:54:46.936481 2172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-s5c5l" podStartSLOduration=58.936413785 podStartE2EDuration="58.936413785s" podCreationTimestamp="2025-10-29 04:53:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 04:54:46.901126809 +0000 UTC m=+61.811162803" watchObservedRunningTime="2025-10-29 04:54:46.936413785 +0000 UTC m=+61.846449780" Oct 29 04:54:46.938753 env[1306]: time="2025-10-29T04:54:46.938698554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-865b7496cf-28bh8,Uid:9c2b379e-441a-4610-b0bd-30a6fa391f82,Namespace:calico-system,Attempt:1,}" Oct 29 04:54:46.990600 env[1306]: 2025-10-29 04:54:46.773 [INFO][4314] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" Oct 29 04:54:46.990600 env[1306]: 2025-10-29 04:54:46.773 [INFO][4314] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" iface="eth0" netns="/var/run/netns/cni-5773d799-1872-2646-9b26-d5f6dbaede6b" Oct 29 04:54:46.990600 env[1306]: 2025-10-29 04:54:46.774 [INFO][4314] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" iface="eth0" netns="/var/run/netns/cni-5773d799-1872-2646-9b26-d5f6dbaede6b" Oct 29 04:54:46.990600 env[1306]: 2025-10-29 04:54:46.774 [INFO][4314] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" iface="eth0" netns="/var/run/netns/cni-5773d799-1872-2646-9b26-d5f6dbaede6b" Oct 29 04:54:46.990600 env[1306]: 2025-10-29 04:54:46.774 [INFO][4314] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" Oct 29 04:54:46.990600 env[1306]: 2025-10-29 04:54:46.774 [INFO][4314] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" Oct 29 04:54:46.990600 env[1306]: 2025-10-29 04:54:46.874 [INFO][4349] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" HandleID="k8s-pod-network.eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" Workload="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lslb8-eth0" Oct 29 04:54:46.990600 env[1306]: 2025-10-29 04:54:46.875 [INFO][4349] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 04:54:46.990600 env[1306]: 2025-10-29 04:54:46.879 [INFO][4349] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 04:54:46.990600 env[1306]: 2025-10-29 04:54:46.907 [WARNING][4349] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" HandleID="k8s-pod-network.eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" Workload="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lslb8-eth0" Oct 29 04:54:46.990600 env[1306]: 2025-10-29 04:54:46.907 [INFO][4349] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" HandleID="k8s-pod-network.eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" Workload="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lslb8-eth0" Oct 29 04:54:46.990600 env[1306]: 2025-10-29 04:54:46.931 [INFO][4349] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 04:54:46.990600 env[1306]: 2025-10-29 04:54:46.977 [INFO][4314] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" Oct 29 04:54:46.990600 env[1306]: time="2025-10-29T04:54:46.989177892Z" level=info msg="TearDown network for sandbox \"eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1\" successfully" Oct 29 04:54:46.990600 env[1306]: time="2025-10-29T04:54:46.989258503Z" level=info msg="StopPodSandbox for \"eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1\" returns successfully" Oct 29 04:54:46.989167 systemd[1]: run-netns-cni\x2d5773d799\x2d1872\x2d2646\x2d9b26\x2dd5f6dbaede6b.mount: Deactivated successfully. 
Oct 29 04:54:46.993199 env[1306]: time="2025-10-29T04:54:46.993127612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lslb8,Uid:b87dfa06-fb00-43d6-9e83-3b9e31aa23c5,Namespace:kube-system,Attempt:1,}" Oct 29 04:54:46.993000 audit[4378]: NETFILTER_CFG table=filter:116 family=2 entries=20 op=nft_register_rule pid=4378 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:46.993000 audit[4378]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffbc66b900 a2=0 a3=7fffbc66b8ec items=0 ppid=2278 pid=4378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:47.010809 kernel: audit: type=1325 audit(1761713686.993:429): table=filter:116 family=2 entries=20 op=nft_register_rule pid=4378 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:47.011162 kernel: audit: type=1300 audit(1761713686.993:429): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffbc66b900 a2=0 a3=7fffbc66b8ec items=0 ppid=2278 pid=4378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:47.011229 kernel: audit: type=1327 audit(1761713686.993:429): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:46.993000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:47.015774 kernel: audit: type=1325 audit(1761713687.011:430): table=nat:117 family=2 entries=14 op=nft_register_rule pid=4378 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:47.011000 audit[4378]: NETFILTER_CFG table=nat:117 family=2 entries=14 
op=nft_register_rule pid=4378 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:47.011000 audit[4378]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffbc66b900 a2=0 a3=0 items=0 ppid=2278 pid=4378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:47.045567 kernel: audit: type=1300 audit(1761713687.011:430): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffbc66b900 a2=0 a3=0 items=0 ppid=2278 pid=4378 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:47.011000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:47.052518 kernel: audit: type=1327 audit(1761713687.011:430): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:47.093000 audit[4403]: NETFILTER_CFG table=filter:118 family=2 entries=17 op=nft_register_rule pid=4403 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:47.101121 kernel: audit: type=1325 audit(1761713687.093:431): table=filter:118 family=2 entries=17 op=nft_register_rule pid=4403 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:47.093000 audit[4403]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffee89369e0 a2=0 a3=7ffee89369cc items=0 ppid=2278 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:47.093000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:47.105000 audit[4403]: NETFILTER_CFG table=nat:119 family=2 entries=35 op=nft_register_chain pid=4403 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:47.105000 audit[4403]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffee89369e0 a2=0 a3=7ffee89369cc items=0 ppid=2278 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:47.105000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:47.221670 env[1306]: 2025-10-29 04:54:47.092 [WARNING][4372] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xtjva.gb1.brightbox.com-k8s-goldmane--666569f655--4pd8r-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"4d534185-a9e7-4c27-807c-917c6d4b755f", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 4, 54, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xtjva.gb1.brightbox.com", ContainerID:"8232d3906192e649b76adbc082c6af04cb4c4de497bdbe09ed2aebd27dac7fda", Pod:"goldmane-666569f655-4pd8r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.31.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali61b4fc9dc78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 04:54:47.221670 env[1306]: 2025-10-29 04:54:47.092 [INFO][4372] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" Oct 29 04:54:47.221670 env[1306]: 2025-10-29 04:54:47.092 [INFO][4372] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" iface="eth0" netns="" Oct 29 04:54:47.221670 env[1306]: 2025-10-29 04:54:47.092 [INFO][4372] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" Oct 29 04:54:47.221670 env[1306]: 2025-10-29 04:54:47.092 [INFO][4372] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" Oct 29 04:54:47.221670 env[1306]: 2025-10-29 04:54:47.189 [INFO][4405] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" HandleID="k8s-pod-network.a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" Workload="srv--xtjva.gb1.brightbox.com-k8s-goldmane--666569f655--4pd8r-eth0" Oct 29 04:54:47.221670 env[1306]: 2025-10-29 04:54:47.192 [INFO][4405] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 29 04:54:47.221670 env[1306]: 2025-10-29 04:54:47.194 [INFO][4405] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 04:54:47.221670 env[1306]: 2025-10-29 04:54:47.211 [WARNING][4405] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" HandleID="k8s-pod-network.a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" Workload="srv--xtjva.gb1.brightbox.com-k8s-goldmane--666569f655--4pd8r-eth0" Oct 29 04:54:47.221670 env[1306]: 2025-10-29 04:54:47.211 [INFO][4405] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" HandleID="k8s-pod-network.a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" Workload="srv--xtjva.gb1.brightbox.com-k8s-goldmane--666569f655--4pd8r-eth0" Oct 29 04:54:47.221670 env[1306]: 2025-10-29 04:54:47.215 [INFO][4405] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 04:54:47.221670 env[1306]: 2025-10-29 04:54:47.218 [INFO][4372] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" Oct 29 04:54:47.222831 env[1306]: time="2025-10-29T04:54:47.222778786Z" level=info msg="TearDown network for sandbox \"a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b\" successfully" Oct 29 04:54:47.223067 env[1306]: time="2025-10-29T04:54:47.223030740Z" level=info msg="StopPodSandbox for \"a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b\" returns successfully" Oct 29 04:54:47.224010 env[1306]: time="2025-10-29T04:54:47.223969747Z" level=info msg="RemovePodSandbox for \"a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b\"" Oct 29 04:54:47.224085 env[1306]: time="2025-10-29T04:54:47.224028483Z" level=info msg="Forcibly stopping sandbox \"a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b\"" Oct 29 04:54:47.345970 systemd-networkd[1070]: calib1c6a1e78fb: Link UP Oct 29 04:54:47.355572 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 29 04:54:47.355719 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib1c6a1e78fb: link becomes ready Oct 29 04:54:47.355062 systemd-networkd[1070]: calib1c6a1e78fb: Gained carrier Oct 29 04:54:47.386189 env[1306]: 2025-10-29 04:54:47.166 [INFO][4393] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lslb8-eth0 coredns-668d6bf9bc- kube-system b87dfa06-fb00-43d6-9e83-3b9e31aa23c5 1051 0 2025-10-29 04:53:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-xtjva.gb1.brightbox.com coredns-668d6bf9bc-lslb8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib1c6a1e78fb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="026d5ab0ffa264be7c7615ffc2acfdab2dd0fd41e1615f78214c58efa46a8596" 
Namespace="kube-system" Pod="coredns-668d6bf9bc-lslb8" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lslb8-" Oct 29 04:54:47.386189 env[1306]: 2025-10-29 04:54:47.166 [INFO][4393] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="026d5ab0ffa264be7c7615ffc2acfdab2dd0fd41e1615f78214c58efa46a8596" Namespace="kube-system" Pod="coredns-668d6bf9bc-lslb8" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lslb8-eth0" Oct 29 04:54:47.386189 env[1306]: 2025-10-29 04:54:47.260 [INFO][4414] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="026d5ab0ffa264be7c7615ffc2acfdab2dd0fd41e1615f78214c58efa46a8596" HandleID="k8s-pod-network.026d5ab0ffa264be7c7615ffc2acfdab2dd0fd41e1615f78214c58efa46a8596" Workload="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lslb8-eth0" Oct 29 04:54:47.386189 env[1306]: 2025-10-29 04:54:47.260 [INFO][4414] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="026d5ab0ffa264be7c7615ffc2acfdab2dd0fd41e1615f78214c58efa46a8596" HandleID="k8s-pod-network.026d5ab0ffa264be7c7615ffc2acfdab2dd0fd41e1615f78214c58efa46a8596" Workload="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lslb8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000373f30), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-xtjva.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-lslb8", "timestamp":"2025-10-29 04:54:47.260244055 +0000 UTC"}, Hostname:"srv-xtjva.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 04:54:47.386189 env[1306]: 2025-10-29 04:54:47.260 [INFO][4414] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 04:54:47.386189 env[1306]: 2025-10-29 04:54:47.260 [INFO][4414] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 04:54:47.386189 env[1306]: 2025-10-29 04:54:47.261 [INFO][4414] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-xtjva.gb1.brightbox.com' Oct 29 04:54:47.386189 env[1306]: 2025-10-29 04:54:47.271 [INFO][4414] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.026d5ab0ffa264be7c7615ffc2acfdab2dd0fd41e1615f78214c58efa46a8596" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:47.386189 env[1306]: 2025-10-29 04:54:47.279 [INFO][4414] ipam/ipam.go 394: Looking up existing affinities for host host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:47.386189 env[1306]: 2025-10-29 04:54:47.296 [INFO][4414] ipam/ipam.go 511: Trying affinity for 192.168.31.128/26 host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:47.386189 env[1306]: 2025-10-29 04:54:47.301 [INFO][4414] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.128/26 host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:47.386189 env[1306]: 2025-10-29 04:54:47.304 [INFO][4414] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.128/26 host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:47.386189 env[1306]: 2025-10-29 04:54:47.305 [INFO][4414] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.31.128/26 handle="k8s-pod-network.026d5ab0ffa264be7c7615ffc2acfdab2dd0fd41e1615f78214c58efa46a8596" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:47.386189 env[1306]: 2025-10-29 04:54:47.307 [INFO][4414] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.026d5ab0ffa264be7c7615ffc2acfdab2dd0fd41e1615f78214c58efa46a8596 Oct 29 04:54:47.386189 env[1306]: 2025-10-29 04:54:47.316 [INFO][4414] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.31.128/26 handle="k8s-pod-network.026d5ab0ffa264be7c7615ffc2acfdab2dd0fd41e1615f78214c58efa46a8596" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:47.386189 env[1306]: 2025-10-29 04:54:47.324 [INFO][4414] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.31.134/26] 
block=192.168.31.128/26 handle="k8s-pod-network.026d5ab0ffa264be7c7615ffc2acfdab2dd0fd41e1615f78214c58efa46a8596" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:47.386189 env[1306]: 2025-10-29 04:54:47.324 [INFO][4414] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.134/26] handle="k8s-pod-network.026d5ab0ffa264be7c7615ffc2acfdab2dd0fd41e1615f78214c58efa46a8596" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:47.386189 env[1306]: 2025-10-29 04:54:47.325 [INFO][4414] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 04:54:47.386189 env[1306]: 2025-10-29 04:54:47.325 [INFO][4414] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.31.134/26] IPv6=[] ContainerID="026d5ab0ffa264be7c7615ffc2acfdab2dd0fd41e1615f78214c58efa46a8596" HandleID="k8s-pod-network.026d5ab0ffa264be7c7615ffc2acfdab2dd0fd41e1615f78214c58efa46a8596" Workload="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lslb8-eth0" Oct 29 04:54:47.387782 env[1306]: 2025-10-29 04:54:47.332 [INFO][4393] cni-plugin/k8s.go 418: Populated endpoint ContainerID="026d5ab0ffa264be7c7615ffc2acfdab2dd0fd41e1615f78214c58efa46a8596" Namespace="kube-system" Pod="coredns-668d6bf9bc-lslb8" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lslb8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lslb8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b87dfa06-fb00-43d6-9e83-3b9e31aa23c5", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 4, 53, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xtjva.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-lslb8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib1c6a1e78fb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 04:54:47.387782 env[1306]: 2025-10-29 04:54:47.333 [INFO][4393] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.134/32] ContainerID="026d5ab0ffa264be7c7615ffc2acfdab2dd0fd41e1615f78214c58efa46a8596" Namespace="kube-system" Pod="coredns-668d6bf9bc-lslb8" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lslb8-eth0" Oct 29 04:54:47.387782 env[1306]: 2025-10-29 04:54:47.333 [INFO][4393] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib1c6a1e78fb ContainerID="026d5ab0ffa264be7c7615ffc2acfdab2dd0fd41e1615f78214c58efa46a8596" Namespace="kube-system" Pod="coredns-668d6bf9bc-lslb8" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lslb8-eth0" Oct 29 04:54:47.387782 env[1306]: 2025-10-29 04:54:47.356 [INFO][4393] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="026d5ab0ffa264be7c7615ffc2acfdab2dd0fd41e1615f78214c58efa46a8596" Namespace="kube-system" Pod="coredns-668d6bf9bc-lslb8" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lslb8-eth0" Oct 29 04:54:47.387782 env[1306]: 2025-10-29 04:54:47.357 [INFO][4393] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="026d5ab0ffa264be7c7615ffc2acfdab2dd0fd41e1615f78214c58efa46a8596" Namespace="kube-system" Pod="coredns-668d6bf9bc-lslb8" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lslb8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lslb8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b87dfa06-fb00-43d6-9e83-3b9e31aa23c5", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 4, 53, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xtjva.gb1.brightbox.com", ContainerID:"026d5ab0ffa264be7c7615ffc2acfdab2dd0fd41e1615f78214c58efa46a8596", Pod:"coredns-668d6bf9bc-lslb8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib1c6a1e78fb", MAC:"ca:ea:22:b3:7a:2f", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 04:54:47.387782 env[1306]: 2025-10-29 04:54:47.376 [INFO][4393] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="026d5ab0ffa264be7c7615ffc2acfdab2dd0fd41e1615f78214c58efa46a8596" Namespace="kube-system" Pod="coredns-668d6bf9bc-lslb8" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lslb8-eth0" Oct 29 04:54:47.452012 env[1306]: time="2025-10-29T04:54:47.451675413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 04:54:47.452012 env[1306]: time="2025-10-29T04:54:47.451958896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 04:54:47.452297 env[1306]: time="2025-10-29T04:54:47.452043685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 04:54:47.457413 env[1306]: time="2025-10-29T04:54:47.455251993Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/026d5ab0ffa264be7c7615ffc2acfdab2dd0fd41e1615f78214c58efa46a8596 pid=4468 runtime=io.containerd.runc.v2 Oct 29 04:54:47.477000 audit[4481]: NETFILTER_CFG table=filter:120 family=2 entries=44 op=nft_register_chain pid=4481 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 29 04:54:47.477000 audit[4481]: SYSCALL arch=c000003e syscall=46 success=yes exit=21516 a0=3 a1=7fffe2de8570 a2=0 a3=7fffe2de855c items=0 ppid=3576 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:47.477000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 29 04:54:47.485919 systemd-networkd[1070]: cali0127d6c02ad: Link UP Oct 29 04:54:47.496804 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali0127d6c02ad: link becomes ready Oct 29 04:54:47.497884 systemd-networkd[1070]: cali0127d6c02ad: Gained carrier Oct 29 04:54:47.499807 env[1306]: time="2025-10-29T04:54:47.499755574Z" level=info msg="StopPodSandbox for \"c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4\"" Oct 29 04:54:47.551039 env[1306]: 2025-10-29 04:54:47.196 [INFO][4380] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--xtjva.gb1.brightbox.com-k8s-calico--kube--controllers--865b7496cf--28bh8-eth0 calico-kube-controllers-865b7496cf- calico-system 9c2b379e-441a-4610-b0bd-30a6fa391f82 1050 0 2025-10-29 04:54:07 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers 
pod-template-hash:865b7496cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-xtjva.gb1.brightbox.com calico-kube-controllers-865b7496cf-28bh8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0127d6c02ad [] [] }} ContainerID="41a8798183adca961754f9de2135e56b50dc3a47ba5ff4c8e1b8731626c66c90" Namespace="calico-system" Pod="calico-kube-controllers-865b7496cf-28bh8" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-calico--kube--controllers--865b7496cf--28bh8-" Oct 29 04:54:47.551039 env[1306]: 2025-10-29 04:54:47.196 [INFO][4380] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="41a8798183adca961754f9de2135e56b50dc3a47ba5ff4c8e1b8731626c66c90" Namespace="calico-system" Pod="calico-kube-controllers-865b7496cf-28bh8" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-calico--kube--controllers--865b7496cf--28bh8-eth0" Oct 29 04:54:47.551039 env[1306]: 2025-10-29 04:54:47.288 [INFO][4420] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="41a8798183adca961754f9de2135e56b50dc3a47ba5ff4c8e1b8731626c66c90" HandleID="k8s-pod-network.41a8798183adca961754f9de2135e56b50dc3a47ba5ff4c8e1b8731626c66c90" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--kube--controllers--865b7496cf--28bh8-eth0" Oct 29 04:54:47.551039 env[1306]: 2025-10-29 04:54:47.289 [INFO][4420] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="41a8798183adca961754f9de2135e56b50dc3a47ba5ff4c8e1b8731626c66c90" HandleID="k8s-pod-network.41a8798183adca961754f9de2135e56b50dc3a47ba5ff4c8e1b8731626c66c90" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--kube--controllers--865b7496cf--28bh8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd5a0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-xtjva.gb1.brightbox.com", 
"pod":"calico-kube-controllers-865b7496cf-28bh8", "timestamp":"2025-10-29 04:54:47.288337867 +0000 UTC"}, Hostname:"srv-xtjva.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 04:54:47.551039 env[1306]: 2025-10-29 04:54:47.289 [INFO][4420] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 04:54:47.551039 env[1306]: 2025-10-29 04:54:47.325 [INFO][4420] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 04:54:47.551039 env[1306]: 2025-10-29 04:54:47.325 [INFO][4420] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-xtjva.gb1.brightbox.com' Oct 29 04:54:47.551039 env[1306]: 2025-10-29 04:54:47.383 [INFO][4420] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.41a8798183adca961754f9de2135e56b50dc3a47ba5ff4c8e1b8731626c66c90" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:47.551039 env[1306]: 2025-10-29 04:54:47.401 [INFO][4420] ipam/ipam.go 394: Looking up existing affinities for host host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:47.551039 env[1306]: 2025-10-29 04:54:47.414 [INFO][4420] ipam/ipam.go 511: Trying affinity for 192.168.31.128/26 host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:47.551039 env[1306]: 2025-10-29 04:54:47.418 [INFO][4420] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.128/26 host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:47.551039 env[1306]: 2025-10-29 04:54:47.430 [INFO][4420] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.128/26 host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:47.551039 env[1306]: 2025-10-29 04:54:47.430 [INFO][4420] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.31.128/26 handle="k8s-pod-network.41a8798183adca961754f9de2135e56b50dc3a47ba5ff4c8e1b8731626c66c90" host="srv-xtjva.gb1.brightbox.com" 
Oct 29 04:54:47.551039 env[1306]: 2025-10-29 04:54:47.437 [INFO][4420] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.41a8798183adca961754f9de2135e56b50dc3a47ba5ff4c8e1b8731626c66c90 Oct 29 04:54:47.551039 env[1306]: 2025-10-29 04:54:47.453 [INFO][4420] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.31.128/26 handle="k8s-pod-network.41a8798183adca961754f9de2135e56b50dc3a47ba5ff4c8e1b8731626c66c90" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:47.551039 env[1306]: 2025-10-29 04:54:47.462 [INFO][4420] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.31.135/26] block=192.168.31.128/26 handle="k8s-pod-network.41a8798183adca961754f9de2135e56b50dc3a47ba5ff4c8e1b8731626c66c90" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:47.551039 env[1306]: 2025-10-29 04:54:47.462 [INFO][4420] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.135/26] handle="k8s-pod-network.41a8798183adca961754f9de2135e56b50dc3a47ba5ff4c8e1b8731626c66c90" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:47.551039 env[1306]: 2025-10-29 04:54:47.462 [INFO][4420] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 29 04:54:47.551039 env[1306]: 2025-10-29 04:54:47.462 [INFO][4420] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.31.135/26] IPv6=[] ContainerID="41a8798183adca961754f9de2135e56b50dc3a47ba5ff4c8e1b8731626c66c90" HandleID="k8s-pod-network.41a8798183adca961754f9de2135e56b50dc3a47ba5ff4c8e1b8731626c66c90" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--kube--controllers--865b7496cf--28bh8-eth0" Oct 29 04:54:47.552466 env[1306]: 2025-10-29 04:54:47.468 [INFO][4380] cni-plugin/k8s.go 418: Populated endpoint ContainerID="41a8798183adca961754f9de2135e56b50dc3a47ba5ff4c8e1b8731626c66c90" Namespace="calico-system" Pod="calico-kube-controllers-865b7496cf-28bh8" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-calico--kube--controllers--865b7496cf--28bh8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xtjva.gb1.brightbox.com-k8s-calico--kube--controllers--865b7496cf--28bh8-eth0", GenerateName:"calico-kube-controllers-865b7496cf-", Namespace:"calico-system", SelfLink:"", UID:"9c2b379e-441a-4610-b0bd-30a6fa391f82", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 4, 54, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"865b7496cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xtjva.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-865b7496cf-28bh8", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.31.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0127d6c02ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 04:54:47.552466 env[1306]: 2025-10-29 04:54:47.473 [INFO][4380] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.135/32] ContainerID="41a8798183adca961754f9de2135e56b50dc3a47ba5ff4c8e1b8731626c66c90" Namespace="calico-system" Pod="calico-kube-controllers-865b7496cf-28bh8" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-calico--kube--controllers--865b7496cf--28bh8-eth0" Oct 29 04:54:47.552466 env[1306]: 2025-10-29 04:54:47.473 [INFO][4380] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0127d6c02ad ContainerID="41a8798183adca961754f9de2135e56b50dc3a47ba5ff4c8e1b8731626c66c90" Namespace="calico-system" Pod="calico-kube-controllers-865b7496cf-28bh8" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-calico--kube--controllers--865b7496cf--28bh8-eth0" Oct 29 04:54:47.552466 env[1306]: 2025-10-29 04:54:47.502 [INFO][4380] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="41a8798183adca961754f9de2135e56b50dc3a47ba5ff4c8e1b8731626c66c90" Namespace="calico-system" Pod="calico-kube-controllers-865b7496cf-28bh8" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-calico--kube--controllers--865b7496cf--28bh8-eth0" Oct 29 04:54:47.552466 env[1306]: 2025-10-29 04:54:47.504 [INFO][4380] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="41a8798183adca961754f9de2135e56b50dc3a47ba5ff4c8e1b8731626c66c90" Namespace="calico-system" Pod="calico-kube-controllers-865b7496cf-28bh8" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-calico--kube--controllers--865b7496cf--28bh8-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xtjva.gb1.brightbox.com-k8s-calico--kube--controllers--865b7496cf--28bh8-eth0", GenerateName:"calico-kube-controllers-865b7496cf-", Namespace:"calico-system", SelfLink:"", UID:"9c2b379e-441a-4610-b0bd-30a6fa391f82", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 4, 54, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"865b7496cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xtjva.gb1.brightbox.com", ContainerID:"41a8798183adca961754f9de2135e56b50dc3a47ba5ff4c8e1b8731626c66c90", Pod:"calico-kube-controllers-865b7496cf-28bh8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.31.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0127d6c02ad", MAC:"be:76:aa:b7:e8:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 04:54:47.552466 env[1306]: 2025-10-29 04:54:47.543 [INFO][4380] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="41a8798183adca961754f9de2135e56b50dc3a47ba5ff4c8e1b8731626c66c90" Namespace="calico-system" Pod="calico-kube-controllers-865b7496cf-28bh8" 
WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-calico--kube--controllers--865b7496cf--28bh8-eth0" Oct 29 04:54:47.598000 audit[4528]: NETFILTER_CFG table=filter:121 family=2 entries=52 op=nft_register_chain pid=4528 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 29 04:54:47.598000 audit[4528]: SYSCALL arch=c000003e syscall=46 success=yes exit=24312 a0=3 a1=7ffc64dc7a80 a2=0 a3=7ffc64dc7a6c items=0 ppid=3576 pid=4528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:47.598000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 29 04:54:47.620862 env[1306]: time="2025-10-29T04:54:47.607067900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 04:54:47.620862 env[1306]: time="2025-10-29T04:54:47.607138152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 04:54:47.620862 env[1306]: time="2025-10-29T04:54:47.607156312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 04:54:47.620862 env[1306]: time="2025-10-29T04:54:47.607420220Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/41a8798183adca961754f9de2135e56b50dc3a47ba5ff4c8e1b8731626c66c90 pid=4523 runtime=io.containerd.runc.v2 Oct 29 04:54:47.665705 env[1306]: 2025-10-29 04:54:47.410 [WARNING][4437] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xtjva.gb1.brightbox.com-k8s-goldmane--666569f655--4pd8r-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"4d534185-a9e7-4c27-807c-917c6d4b755f", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 4, 54, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xtjva.gb1.brightbox.com", ContainerID:"8232d3906192e649b76adbc082c6af04cb4c4de497bdbe09ed2aebd27dac7fda", Pod:"goldmane-666569f655-4pd8r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.31.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali61b4fc9dc78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 04:54:47.665705 env[1306]: 2025-10-29 04:54:47.410 [INFO][4437] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" Oct 29 04:54:47.665705 env[1306]: 2025-10-29 04:54:47.410 [INFO][4437] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" iface="eth0" netns="" Oct 29 04:54:47.665705 env[1306]: 2025-10-29 04:54:47.410 [INFO][4437] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" Oct 29 04:54:47.665705 env[1306]: 2025-10-29 04:54:47.410 [INFO][4437] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" Oct 29 04:54:47.665705 env[1306]: 2025-10-29 04:54:47.607 [INFO][4457] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" HandleID="k8s-pod-network.a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" Workload="srv--xtjva.gb1.brightbox.com-k8s-goldmane--666569f655--4pd8r-eth0" Oct 29 04:54:47.665705 env[1306]: 2025-10-29 04:54:47.608 [INFO][4457] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 04:54:47.665705 env[1306]: 2025-10-29 04:54:47.608 [INFO][4457] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 04:54:47.665705 env[1306]: 2025-10-29 04:54:47.625 [WARNING][4457] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" HandleID="k8s-pod-network.a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" Workload="srv--xtjva.gb1.brightbox.com-k8s-goldmane--666569f655--4pd8r-eth0" Oct 29 04:54:47.665705 env[1306]: 2025-10-29 04:54:47.625 [INFO][4457] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" HandleID="k8s-pod-network.a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" Workload="srv--xtjva.gb1.brightbox.com-k8s-goldmane--666569f655--4pd8r-eth0" Oct 29 04:54:47.665705 env[1306]: 2025-10-29 04:54:47.635 [INFO][4457] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 04:54:47.665705 env[1306]: 2025-10-29 04:54:47.645 [INFO][4437] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b" Oct 29 04:54:47.667182 env[1306]: time="2025-10-29T04:54:47.665779614Z" level=info msg="TearDown network for sandbox \"a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b\" successfully" Oct 29 04:54:47.678028 env[1306]: time="2025-10-29T04:54:47.677975732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lslb8,Uid:b87dfa06-fb00-43d6-9e83-3b9e31aa23c5,Namespace:kube-system,Attempt:1,} returns sandbox id \"026d5ab0ffa264be7c7615ffc2acfdab2dd0fd41e1615f78214c58efa46a8596\"" Oct 29 04:54:47.685362 env[1306]: time="2025-10-29T04:54:47.684397208Z" level=info msg="RemovePodSandbox \"a100f3f7e0216e3872408bc131f7f3db766426cc02355bb2a4d78491e2f1530b\" returns successfully" Oct 29 04:54:47.688423 env[1306]: time="2025-10-29T04:54:47.688385184Z" level=info msg="CreateContainer within sandbox \"026d5ab0ffa264be7c7615ffc2acfdab2dd0fd41e1615f78214c58efa46a8596\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 29 04:54:47.735216 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount63490350.mount: Deactivated successfully. Oct 29 04:54:47.739264 env[1306]: time="2025-10-29T04:54:47.739191293Z" level=info msg="CreateContainer within sandbox \"026d5ab0ffa264be7c7615ffc2acfdab2dd0fd41e1615f78214c58efa46a8596\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2c6da010da2538784942b05432485c81be2aaf910cc9963b17354fcfe711b5ea\"" Oct 29 04:54:47.742787 env[1306]: time="2025-10-29T04:54:47.742730137Z" level=info msg="StartContainer for \"2c6da010da2538784942b05432485c81be2aaf910cc9963b17354fcfe711b5ea\"" Oct 29 04:54:47.830697 env[1306]: time="2025-10-29T04:54:47.830610217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-865b7496cf-28bh8,Uid:9c2b379e-441a-4610-b0bd-30a6fa391f82,Namespace:calico-system,Attempt:1,} returns sandbox id \"41a8798183adca961754f9de2135e56b50dc3a47ba5ff4c8e1b8731626c66c90\"" Oct 29 04:54:47.835785 env[1306]: time="2025-10-29T04:54:47.835742695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 04:54:47.923940 env[1306]: 2025-10-29 04:54:47.809 [INFO][4520] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" Oct 29 04:54:47.923940 env[1306]: 2025-10-29 04:54:47.813 [INFO][4520] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" iface="eth0" netns="/var/run/netns/cni-6f7de803-412e-8c63-4719-9fc301492c11" Oct 29 04:54:47.923940 env[1306]: 2025-10-29 04:54:47.814 [INFO][4520] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" iface="eth0" netns="/var/run/netns/cni-6f7de803-412e-8c63-4719-9fc301492c11" Oct 29 04:54:47.923940 env[1306]: 2025-10-29 04:54:47.814 [INFO][4520] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" iface="eth0" netns="/var/run/netns/cni-6f7de803-412e-8c63-4719-9fc301492c11" Oct 29 04:54:47.923940 env[1306]: 2025-10-29 04:54:47.814 [INFO][4520] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" Oct 29 04:54:47.923940 env[1306]: 2025-10-29 04:54:47.814 [INFO][4520] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" Oct 29 04:54:47.923940 env[1306]: 2025-10-29 04:54:47.900 [INFO][4587] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" HandleID="k8s-pod-network.c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--8q748-eth0" Oct 29 04:54:47.923940 env[1306]: 2025-10-29 04:54:47.900 [INFO][4587] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 04:54:47.923940 env[1306]: 2025-10-29 04:54:47.900 [INFO][4587] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 04:54:47.923940 env[1306]: 2025-10-29 04:54:47.912 [WARNING][4587] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" HandleID="k8s-pod-network.c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--8q748-eth0" Oct 29 04:54:47.923940 env[1306]: 2025-10-29 04:54:47.912 [INFO][4587] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" HandleID="k8s-pod-network.c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--8q748-eth0" Oct 29 04:54:47.923940 env[1306]: 2025-10-29 04:54:47.915 [INFO][4587] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 04:54:47.923940 env[1306]: 2025-10-29 04:54:47.918 [INFO][4520] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" Oct 29 04:54:47.925465 env[1306]: time="2025-10-29T04:54:47.925399247Z" level=info msg="TearDown network for sandbox \"c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4\" successfully" Oct 29 04:54:47.925608 env[1306]: time="2025-10-29T04:54:47.925574525Z" level=info msg="StopPodSandbox for \"c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4\" returns successfully" Oct 29 04:54:47.927093 env[1306]: time="2025-10-29T04:54:47.927053580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-678d6449b5-8q748,Uid:c95faceb-3919-455c-bd6c-4a68d6375a6d,Namespace:calico-apiserver,Attempt:1,}" Oct 29 04:54:47.945988 env[1306]: time="2025-10-29T04:54:47.944848846Z" level=info msg="StartContainer for \"2c6da010da2538784942b05432485c81be2aaf910cc9963b17354fcfe711b5ea\" returns successfully" Oct 29 04:54:48.044857 systemd-networkd[1070]: calie75e69b7c7f: Gained IPv6LL Oct 29 04:54:48.157861 env[1306]: time="2025-10-29T04:54:48.157673906Z" level=info msg="trying 
next host - response was http.StatusNotFound" host=ghcr.io Oct 29 04:54:48.158968 env[1306]: time="2025-10-29T04:54:48.158875128Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 04:54:48.160426 kubelet[2172]: E1029 04:54:48.159295 2172 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 04:54:48.160426 kubelet[2172]: E1029 04:54:48.159468 2172 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 04:54:48.160426 kubelet[2172]: E1029 04:54:48.159843 2172 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2qkdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-865b7496cf-28bh8_calico-system(9c2b379e-441a-4610-b0bd-30a6fa391f82): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 04:54:48.162034 kubelet[2172]: E1029 04:54:48.161934 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-865b7496cf-28bh8" podUID="9c2b379e-441a-4610-b0bd-30a6fa391f82" Oct 29 04:54:48.173988 systemd-networkd[1070]: cali1f2f8ab088e: Link UP Oct 29 04:54:48.181428 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali1f2f8ab088e: link becomes ready Oct 29 04:54:48.180934 
systemd-networkd[1070]: cali1f2f8ab088e: Gained carrier Oct 29 04:54:48.238512 env[1306]: 2025-10-29 04:54:48.062 [INFO][4613] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--8q748-eth0 calico-apiserver-678d6449b5- calico-apiserver c95faceb-3919-455c-bd6c-4a68d6375a6d 1072 0 2025-10-29 04:54:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:678d6449b5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-xtjva.gb1.brightbox.com calico-apiserver-678d6449b5-8q748 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1f2f8ab088e [] [] }} ContainerID="1f209eafcd39496bc08c52f801b25976b94c33b8e90ea00a8992b3d934c0dcd7" Namespace="calico-apiserver" Pod="calico-apiserver-678d6449b5-8q748" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--8q748-" Oct 29 04:54:48.238512 env[1306]: 2025-10-29 04:54:48.062 [INFO][4613] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1f209eafcd39496bc08c52f801b25976b94c33b8e90ea00a8992b3d934c0dcd7" Namespace="calico-apiserver" Pod="calico-apiserver-678d6449b5-8q748" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--8q748-eth0" Oct 29 04:54:48.238512 env[1306]: 2025-10-29 04:54:48.104 [INFO][4624] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f209eafcd39496bc08c52f801b25976b94c33b8e90ea00a8992b3d934c0dcd7" HandleID="k8s-pod-network.1f209eafcd39496bc08c52f801b25976b94c33b8e90ea00a8992b3d934c0dcd7" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--8q748-eth0" Oct 29 04:54:48.238512 env[1306]: 2025-10-29 04:54:48.104 [INFO][4624] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="1f209eafcd39496bc08c52f801b25976b94c33b8e90ea00a8992b3d934c0dcd7" HandleID="k8s-pod-network.1f209eafcd39496bc08c52f801b25976b94c33b8e90ea00a8992b3d934c0dcd7" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--8q748-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d15d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-xtjva.gb1.brightbox.com", "pod":"calico-apiserver-678d6449b5-8q748", "timestamp":"2025-10-29 04:54:48.104341769 +0000 UTC"}, Hostname:"srv-xtjva.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 29 04:54:48.238512 env[1306]: 2025-10-29 04:54:48.104 [INFO][4624] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 04:54:48.238512 env[1306]: 2025-10-29 04:54:48.105 [INFO][4624] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 04:54:48.238512 env[1306]: 2025-10-29 04:54:48.105 [INFO][4624] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-xtjva.gb1.brightbox.com' Oct 29 04:54:48.238512 env[1306]: 2025-10-29 04:54:48.118 [INFO][4624] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1f209eafcd39496bc08c52f801b25976b94c33b8e90ea00a8992b3d934c0dcd7" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:48.238512 env[1306]: 2025-10-29 04:54:48.124 [INFO][4624] ipam/ipam.go 394: Looking up existing affinities for host host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:48.238512 env[1306]: 2025-10-29 04:54:48.131 [INFO][4624] ipam/ipam.go 511: Trying affinity for 192.168.31.128/26 host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:48.238512 env[1306]: 2025-10-29 04:54:48.134 [INFO][4624] ipam/ipam.go 158: Attempting to load block cidr=192.168.31.128/26 host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:48.238512 env[1306]: 2025-10-29 04:54:48.137 [INFO][4624] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.31.128/26 host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:48.238512 env[1306]: 2025-10-29 04:54:48.138 [INFO][4624] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.31.128/26 handle="k8s-pod-network.1f209eafcd39496bc08c52f801b25976b94c33b8e90ea00a8992b3d934c0dcd7" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:48.238512 env[1306]: 2025-10-29 04:54:48.140 [INFO][4624] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1f209eafcd39496bc08c52f801b25976b94c33b8e90ea00a8992b3d934c0dcd7 Oct 29 04:54:48.238512 env[1306]: 2025-10-29 04:54:48.146 [INFO][4624] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.31.128/26 handle="k8s-pod-network.1f209eafcd39496bc08c52f801b25976b94c33b8e90ea00a8992b3d934c0dcd7" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:48.238512 env[1306]: 2025-10-29 04:54:48.158 [INFO][4624] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.31.136/26] 
block=192.168.31.128/26 handle="k8s-pod-network.1f209eafcd39496bc08c52f801b25976b94c33b8e90ea00a8992b3d934c0dcd7" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:48.238512 env[1306]: 2025-10-29 04:54:48.158 [INFO][4624] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.31.136/26] handle="k8s-pod-network.1f209eafcd39496bc08c52f801b25976b94c33b8e90ea00a8992b3d934c0dcd7" host="srv-xtjva.gb1.brightbox.com" Oct 29 04:54:48.238512 env[1306]: 2025-10-29 04:54:48.158 [INFO][4624] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 04:54:48.238512 env[1306]: 2025-10-29 04:54:48.162 [INFO][4624] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.31.136/26] IPv6=[] ContainerID="1f209eafcd39496bc08c52f801b25976b94c33b8e90ea00a8992b3d934c0dcd7" HandleID="k8s-pod-network.1f209eafcd39496bc08c52f801b25976b94c33b8e90ea00a8992b3d934c0dcd7" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--8q748-eth0" Oct 29 04:54:48.239892 env[1306]: 2025-10-29 04:54:48.167 [INFO][4613] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1f209eafcd39496bc08c52f801b25976b94c33b8e90ea00a8992b3d934c0dcd7" Namespace="calico-apiserver" Pod="calico-apiserver-678d6449b5-8q748" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--8q748-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--8q748-eth0", GenerateName:"calico-apiserver-678d6449b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"c95faceb-3919-455c-bd6c-4a68d6375a6d", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 4, 54, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"678d6449b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xtjva.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-678d6449b5-8q748", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1f2f8ab088e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 04:54:48.239892 env[1306]: 2025-10-29 04:54:48.167 [INFO][4613] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.31.136/32] ContainerID="1f209eafcd39496bc08c52f801b25976b94c33b8e90ea00a8992b3d934c0dcd7" Namespace="calico-apiserver" Pod="calico-apiserver-678d6449b5-8q748" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--8q748-eth0" Oct 29 04:54:48.239892 env[1306]: 2025-10-29 04:54:48.168 [INFO][4613] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f2f8ab088e ContainerID="1f209eafcd39496bc08c52f801b25976b94c33b8e90ea00a8992b3d934c0dcd7" Namespace="calico-apiserver" Pod="calico-apiserver-678d6449b5-8q748" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--8q748-eth0" Oct 29 04:54:48.239892 env[1306]: 2025-10-29 04:54:48.184 [INFO][4613] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f209eafcd39496bc08c52f801b25976b94c33b8e90ea00a8992b3d934c0dcd7" Namespace="calico-apiserver" Pod="calico-apiserver-678d6449b5-8q748" 
WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--8q748-eth0" Oct 29 04:54:48.239892 env[1306]: 2025-10-29 04:54:48.195 [INFO][4613] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1f209eafcd39496bc08c52f801b25976b94c33b8e90ea00a8992b3d934c0dcd7" Namespace="calico-apiserver" Pod="calico-apiserver-678d6449b5-8q748" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--8q748-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--8q748-eth0", GenerateName:"calico-apiserver-678d6449b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"c95faceb-3919-455c-bd6c-4a68d6375a6d", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 4, 54, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"678d6449b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xtjva.gb1.brightbox.com", ContainerID:"1f209eafcd39496bc08c52f801b25976b94c33b8e90ea00a8992b3d934c0dcd7", Pod:"calico-apiserver-678d6449b5-8q748", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1f2f8ab088e", MAC:"be:b4:c0:48:41:63", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 04:54:48.239892 env[1306]: 2025-10-29 04:54:48.216 [INFO][4613] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1f209eafcd39496bc08c52f801b25976b94c33b8e90ea00a8992b3d934c0dcd7" Namespace="calico-apiserver" Pod="calico-apiserver-678d6449b5-8q748" WorkloadEndpoint="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--8q748-eth0" Oct 29 04:54:48.295004 env[1306]: time="2025-10-29T04:54:48.294824647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 04:54:48.295317 env[1306]: time="2025-10-29T04:54:48.295272897Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 04:54:48.295561 env[1306]: time="2025-10-29T04:54:48.295515944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 04:54:48.296058 env[1306]: time="2025-10-29T04:54:48.295955228Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1f209eafcd39496bc08c52f801b25976b94c33b8e90ea00a8992b3d934c0dcd7 pid=4645 runtime=io.containerd.runc.v2 Oct 29 04:54:48.306000 audit[4653]: NETFILTER_CFG table=filter:122 family=2 entries=57 op=nft_register_chain pid=4653 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 29 04:54:48.306000 audit[4653]: SYSCALL arch=c000003e syscall=46 success=yes exit=27812 a0=3 a1=7ffd78282bf0 a2=0 a3=7ffd78282bdc items=0 ppid=3576 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:48.306000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 29 04:54:48.403979 env[1306]: time="2025-10-29T04:54:48.403891974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-678d6449b5-8q748,Uid:c95faceb-3919-455c-bd6c-4a68d6375a6d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1f209eafcd39496bc08c52f801b25976b94c33b8e90ea00a8992b3d934c0dcd7\"" Oct 29 04:54:48.409681 env[1306]: time="2025-10-29T04:54:48.409643404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 04:54:48.714854 systemd[1]: run-containerd-runc-k8s.io-2c6da010da2538784942b05432485c81be2aaf910cc9963b17354fcfe711b5ea-runc.sgpDhZ.mount: Deactivated successfully. Oct 29 04:54:48.715590 systemd[1]: run-netns-cni\x2d6f7de803\x2d412e\x2d8c63\x2d4719\x2d9fc301492c11.mount: Deactivated successfully. 
Oct 29 04:54:48.726337 env[1306]: time="2025-10-29T04:54:48.726246365Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 04:54:48.727834 env[1306]: time="2025-10-29T04:54:48.727745570Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 04:54:48.729034 kubelet[2172]: E1029 04:54:48.728162 2172 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 04:54:48.729034 kubelet[2172]: E1029 04:54:48.728267 2172 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 04:54:48.729034 kubelet[2172]: E1029 04:54:48.728739 2172 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kcr87,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-678d6449b5-8q748_calico-apiserver(c95faceb-3919-455c-bd6c-4a68d6375a6d): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 04:54:48.730168 kubelet[2172]: E1029 04:54:48.730062 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-678d6449b5-8q748" podUID="c95faceb-3919-455c-bd6c-4a68d6375a6d" Oct 29 04:54:48.738636 systemd-networkd[1070]: calib1c6a1e78fb: Gained IPv6LL Oct 29 04:54:48.912186 kubelet[2172]: E1029 04:54:48.912132 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-865b7496cf-28bh8" podUID="9c2b379e-441a-4610-b0bd-30a6fa391f82" Oct 29 04:54:48.913473 kubelet[2172]: E1029 04:54:48.913428 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-678d6449b5-8q748" podUID="c95faceb-3919-455c-bd6c-4a68d6375a6d" Oct 29 04:54:48.941911 kubelet[2172]: I1029 04:54:48.941814 2172 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lslb8" podStartSLOduration=60.941773653 podStartE2EDuration="1m0.941773653s" podCreationTimestamp="2025-10-29 04:53:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 04:54:48.918310337 +0000 UTC m=+63.828346339" watchObservedRunningTime="2025-10-29 04:54:48.941773653 +0000 UTC m=+63.851809648" Oct 29 04:54:48.972000 audit[4682]: NETFILTER_CFG table=filter:123 family=2 entries=14 op=nft_register_rule pid=4682 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:48.972000 audit[4682]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcedb74860 a2=0 a3=7ffcedb7484c items=0 ppid=2278 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:48.972000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:48.989000 audit[4682]: NETFILTER_CFG table=nat:124 family=2 entries=44 op=nft_register_rule pid=4682 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:48.989000 audit[4682]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffcedb74860 a2=0 a3=7ffcedb7484c items=0 ppid=2278 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Oct 29 04:54:48.989000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:49.033000 audit[4684]: NETFILTER_CFG table=filter:125 family=2 entries=14 op=nft_register_rule pid=4684 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:49.033000 audit[4684]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe340431a0 a2=0 a3=7ffe3404318c items=0 ppid=2278 pid=4684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:49.033000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:49.059000 audit[4684]: NETFILTER_CFG table=nat:126 family=2 entries=56 op=nft_register_chain pid=4684 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:54:49.059000 audit[4684]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffe340431a0 a2=0 a3=7ffe3404318c items=0 ppid=2278 pid=4684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:54:49.059000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:54:49.313897 systemd-networkd[1070]: cali1f2f8ab088e: Gained IPv6LL Oct 29 04:54:49.569687 systemd-networkd[1070]: cali0127d6c02ad: Gained IPv6LL Oct 29 04:54:49.914545 kubelet[2172]: E1029 04:54:49.914478 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-678d6449b5-8q748" podUID="c95faceb-3919-455c-bd6c-4a68d6375a6d" Oct 29 04:54:50.485226 env[1306]: time="2025-10-29T04:54:50.485151579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 04:54:50.789909 env[1306]: time="2025-10-29T04:54:50.789706629Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 04:54:50.791323 env[1306]: time="2025-10-29T04:54:50.791228403Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 04:54:50.791583 kubelet[2172]: E1029 04:54:50.791525 2172 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 04:54:50.791699 kubelet[2172]: E1029 04:54:50.791597 2172 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 04:54:50.791850 kubelet[2172]: E1029 04:54:50.791780 2172 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1de7ade95d214bc29a765b1d29f494cd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sxv89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8647577b76-v9phq_calico-system(419b3e19-fef1-48f6-b46c-276ff1e0b621): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 04:54:50.794498 env[1306]: time="2025-10-29T04:54:50.794449569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 04:54:51.122367 
env[1306]: time="2025-10-29T04:54:51.122148233Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 04:54:51.123755 env[1306]: time="2025-10-29T04:54:51.123661772Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 04:54:51.124096 kubelet[2172]: E1029 04:54:51.124043 2172 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 04:54:51.124761 kubelet[2172]: E1029 04:54:51.124711 2172 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 04:54:51.125111 kubelet[2172]: E1029 04:54:51.125042 2172 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sxv89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8647577b76-v9phq_calico-system(419b3e19-fef1-48f6-b46c-276ff1e0b621): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 04:54:51.126651 kubelet[2172]: E1029 04:54:51.126592 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8647577b76-v9phq" podUID="419b3e19-fef1-48f6-b46c-276ff1e0b621" Oct 29 04:54:51.486447 env[1306]: time="2025-10-29T04:54:51.486347606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 29 04:54:51.804128 env[1306]: time="2025-10-29T04:54:51.803861459Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 04:54:51.805630 env[1306]: time="2025-10-29T04:54:51.805531820Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 04:54:51.806047 kubelet[2172]: E1029 04:54:51.805975 2172 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 04:54:51.806171 kubelet[2172]: E1029 04:54:51.806058 2172 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 04:54:51.806319 kubelet[2172]: E1029 04:54:51.806249 2172 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r268s
,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4pd8r_calico-system(4d534185-a9e7-4c27-807c-917c6d4b755f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 04:54:51.807730 kubelet[2172]: E1029 04:54:51.807685 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4pd8r" podUID="4d534185-a9e7-4c27-807c-917c6d4b755f" Oct 29 04:54:53.485736 env[1306]: time="2025-10-29T04:54:53.485506760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 04:54:53.785598 env[1306]: time="2025-10-29T04:54:53.785122291Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 04:54:53.786623 env[1306]: time="2025-10-29T04:54:53.786526581Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 04:54:53.786969 kubelet[2172]: E1029 04:54:53.786891 2172 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 04:54:53.787586 kubelet[2172]: E1029 04:54:53.787531 2172 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 04:54:53.788135 kubelet[2172]: E1029 04:54:53.788047 2172 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nm8fq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-678d6449b5-m8bcr_calico-apiserver(b4da2a97-feea-487c-8384-a94163380e6f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 04:54:53.790079 kubelet[2172]: E1029 04:54:53.790024 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-678d6449b5-m8bcr" podUID="b4da2a97-feea-487c-8384-a94163380e6f" Oct 29 04:54:55.486286 env[1306]: time="2025-10-29T04:54:55.485974797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 29 04:54:55.795166 env[1306]: time="2025-10-29T04:54:55.794974473Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 04:54:55.796856 env[1306]: time="2025-10-29T04:54:55.796757197Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 29 04:54:55.797197 kubelet[2172]: E1029 04:54:55.797130 2172 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 04:54:55.797753 
kubelet[2172]: E1029 04:54:55.797206 2172 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 04:54:55.797753 kubelet[2172]: E1029 04:54:55.797402 2172 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kdjn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,S
eccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tptz2_calico-system(de4b152a-29bb-4b0c-a12c-2eda92dd0564): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 29 04:54:55.801279 env[1306]: time="2025-10-29T04:54:55.800960235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 29 04:54:56.104615 env[1306]: time="2025-10-29T04:54:56.104364737Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 04:54:56.106268 env[1306]: time="2025-10-29T04:54:56.106114387Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 29 04:54:56.108264 kubelet[2172]: E1029 04:54:56.106748 2172 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 04:54:56.108264 kubelet[2172]: E1029 04:54:56.106818 2172 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 04:54:56.108264 kubelet[2172]: E1029 04:54:56.106989 2172 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kdjn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,S
eccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tptz2_calico-system(de4b152a-29bb-4b0c-a12c-2eda92dd0564): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 29 04:54:56.109522 kubelet[2172]: E1029 04:54:56.109362 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tptz2" podUID="de4b152a-29bb-4b0c-a12c-2eda92dd0564" Oct 29 04:55:00.486671 env[1306]: time="2025-10-29T04:55:00.486529715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 04:55:00.855051 env[1306]: time="2025-10-29T04:55:00.854839387Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 04:55:00.856910 env[1306]: time="2025-10-29T04:55:00.856792125Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 04:55:00.857230 kubelet[2172]: E1029 04:55:00.857164 2172 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 04:55:00.857734 kubelet[2172]: E1029 04:55:00.857239 2172 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 04:55:00.858838 kubelet[2172]: E1029 04:55:00.858764 2172 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2qkdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-865b7496cf-28bh8_calico-system(9c2b379e-441a-4610-b0bd-30a6fa391f82): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 04:55:00.860049 kubelet[2172]: E1029 04:55:00.859992 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-865b7496cf-28bh8" podUID="9c2b379e-441a-4610-b0bd-30a6fa391f82" Oct 29 04:55:02.485996 env[1306]: time="2025-10-29T04:55:02.485865262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 04:55:02.798659 env[1306]: 
time="2025-10-29T04:55:02.798465499Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 04:55:02.799758 env[1306]: time="2025-10-29T04:55:02.799689106Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 04:55:02.800100 kubelet[2172]: E1029 04:55:02.800046 2172 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 04:55:02.800620 kubelet[2172]: E1029 04:55:02.800118 2172 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 04:55:02.800620 kubelet[2172]: E1029 04:55:02.800287 2172 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kcr87,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-678d6449b5-8q748_calico-apiserver(c95faceb-3919-455c-bd6c-4a68d6375a6d): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 04:55:02.801992 kubelet[2172]: E1029 04:55:02.801920 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-678d6449b5-8q748" podUID="c95faceb-3919-455c-bd6c-4a68d6375a6d" Oct 29 04:55:03.486783 kubelet[2172]: E1029 04:55:03.486706 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8647577b76-v9phq" podUID="419b3e19-fef1-48f6-b46c-276ff1e0b621" Oct 29 04:55:04.485188 kubelet[2172]: E1029 04:55:04.485131 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4pd8r" podUID="4d534185-a9e7-4c27-807c-917c6d4b755f" Oct 29 04:55:08.100530 systemd[1]: run-containerd-runc-k8s.io-71d6bc5d021ddcc10d95b2af898e8a201f0dc4a4a5e05b9cde0366fccf7e5962-runc.sSMW71.mount: Deactivated successfully. Oct 29 04:55:08.485808 kubelet[2172]: E1029 04:55:08.485731 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-678d6449b5-m8bcr" podUID="b4da2a97-feea-487c-8384-a94163380e6f" Oct 29 04:55:09.374673 systemd[1]: Started sshd@7-10.230.24.246:22-147.75.109.163:42956.service. Oct 29 04:55:09.387233 kernel: kauditd_printk_skb: 26 callbacks suppressed Oct 29 04:55:09.387450 kernel: audit: type=1130 audit(1761713709.375:440): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.230.24.246:22-147.75.109.163:42956 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:55:09.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.230.24.246:22-147.75.109.163:42956 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 04:55:09.498185 kubelet[2172]: E1029 04:55:09.498070 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tptz2" podUID="de4b152a-29bb-4b0c-a12c-2eda92dd0564" Oct 29 04:55:09.657227 systemd[1]: Started sshd@8-10.230.24.246:22-80.94.95.115:50594.service. Oct 29 04:55:09.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.24.246:22-80.94.95.115:50594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:55:09.665016 kernel: audit: type=1130 audit(1761713709.658:441): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.24.246:22-80.94.95.115:50594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 04:55:10.366000 audit[4728]: USER_ACCT pid=4728 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:10.377634 kernel: audit: type=1101 audit(1761713710.366:442): pid=4728 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:10.377740 sshd[4728]: Accepted publickey for core from 147.75.109.163 port 42956 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 04:55:10.379417 sshd[4728]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 04:55:10.376000 audit[4728]: CRED_ACQ pid=4728 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:10.386417 kernel: audit: type=1103 audit(1761713710.376:443): pid=4728 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:10.376000 audit[4728]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd95398770 a2=3 a3=0 items=0 ppid=1 pid=4728 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:55:10.397594 kernel: audit: type=1006 audit(1761713710.376:444): pid=4728 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) 
old-ses=4294967295 ses=8 res=1 Oct 29 04:55:10.397703 kernel: audit: type=1300 audit(1761713710.376:444): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd95398770 a2=3 a3=0 items=0 ppid=1 pid=4728 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:55:10.376000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 29 04:55:10.400259 kernel: audit: type=1327 audit(1761713710.376:444): proctitle=737368643A20636F7265205B707269765D Oct 29 04:55:10.418832 systemd[1]: Started session-8.scope. Oct 29 04:55:10.419642 systemd-logind[1290]: New session 8 of user core. Oct 29 04:55:10.430000 audit[4728]: USER_START pid=4728 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:10.441806 kernel: audit: type=1105 audit(1761713710.430:445): pid=4728 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:10.441000 audit[4733]: CRED_ACQ pid=4733 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:10.449400 kernel: audit: type=1103 audit(1761713710.441:446): pid=4733 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 
04:55:11.647827 sshd[4728]: pam_unix(sshd:session): session closed for user core Oct 29 04:55:11.651000 audit[4728]: USER_END pid=4728 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:11.664147 kernel: audit: type=1106 audit(1761713711.651:447): pid=4728 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:11.651000 audit[4728]: CRED_DISP pid=4728 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:11.663971 systemd[1]: sshd@7-10.230.24.246:22-147.75.109.163:42956.service: Deactivated successfully. Oct 29 04:55:11.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.230.24.246:22-147.75.109.163:42956 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:55:11.667002 systemd[1]: session-8.scope: Deactivated successfully. Oct 29 04:55:11.667864 systemd-logind[1290]: Session 8 logged out. Waiting for processes to exit. Oct 29 04:55:11.669526 systemd-logind[1290]: Removed session 8. Oct 29 04:55:12.533549 sshd[4730]: Connection closed by authenticating user operator 80.94.95.115 port 50594 [preauth] Oct 29 04:55:12.533000 audit[4730]: USER_ERR pid=4730 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" 
exe="/usr/sbin/sshd" hostname=80.94.95.115 addr=80.94.95.115 terminal=ssh res=failed' Oct 29 04:55:12.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.24.246:22-80.94.95.115:50594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:55:12.535279 systemd[1]: sshd@8-10.230.24.246:22-80.94.95.115:50594.service: Deactivated successfully. Oct 29 04:55:14.485705 kubelet[2172]: E1029 04:55:14.485626 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-678d6449b5-8q748" podUID="c95faceb-3919-455c-bd6c-4a68d6375a6d" Oct 29 04:55:14.870495 systemd[1]: Started sshd@9-10.230.24.246:22-64.62.156.66:35330.service. Oct 29 04:55:14.889944 kernel: kauditd_printk_skb: 4 callbacks suppressed Oct 29 04:55:14.890203 kernel: audit: type=1130 audit(1761713714.871:452): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.24.246:22-64.62.156.66:35330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:55:14.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.24.246:22-64.62.156.66:35330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 04:55:14.913993 sshd[4747]: kex_exchange_identification: client sent invalid protocol identifier "GET / HTTP/1.1" Oct 29 04:55:14.913993 sshd[4747]: banner exchange: Connection from 64.62.156.66 port 35330: invalid format Oct 29 04:55:14.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.24.246:22-64.62.156.66:35330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:55:14.914923 systemd[1]: sshd@9-10.230.24.246:22-64.62.156.66:35330.service: Deactivated successfully. Oct 29 04:55:14.935414 kernel: audit: type=1131 audit(1761713714.914:453): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.24.246:22-64.62.156.66:35330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:55:15.487143 kubelet[2172]: E1029 04:55:15.486855 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-865b7496cf-28bh8" podUID="9c2b379e-441a-4610-b0bd-30a6fa391f82" Oct 29 04:55:16.487251 env[1306]: time="2025-10-29T04:55:16.487149149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 29 04:55:16.798421 systemd[1]: Started sshd@10-10.230.24.246:22-147.75.109.163:35330.service. 
Oct 29 04:55:16.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.24.246:22-147.75.109.163:35330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:55:16.809484 kernel: audit: type=1130 audit(1761713716.798:454): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.24.246:22-147.75.109.163:35330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:55:17.050483 env[1306]: time="2025-10-29T04:55:17.050234905Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 04:55:17.052257 env[1306]: time="2025-10-29T04:55:17.052170564Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 04:55:17.052662 kubelet[2172]: E1029 04:55:17.052582 2172 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 04:55:17.053256 kubelet[2172]: E1029 04:55:17.052684 2172 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 04:55:17.053256 kubelet[2172]: E1029 04:55:17.052965 2172 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r268s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4pd8r_calico-system(4d534185-a9e7-4c27-807c-917c6d4b755f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 04:55:17.055150 kubelet[2172]: E1029 04:55:17.055084 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4pd8r" podUID="4d534185-a9e7-4c27-807c-917c6d4b755f" Oct 29 04:55:17.720000 audit[4750]: USER_ACCT pid=4750 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 
addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:17.722650 sshd[4750]: Accepted publickey for core from 147.75.109.163 port 35330 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 04:55:17.728408 kernel: audit: type=1101 audit(1761713717.720:455): pid=4750 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:17.728729 sshd[4750]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 04:55:17.727000 audit[4750]: CRED_ACQ pid=4750 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:17.739510 kernel: audit: type=1103 audit(1761713717.727:456): pid=4750 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:17.739706 kernel: audit: type=1006 audit(1761713717.727:457): pid=4750 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Oct 29 04:55:17.727000 audit[4750]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd00a1a8f0 a2=3 a3=0 items=0 ppid=1 pid=4750 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:55:17.746423 kernel: audit: type=1300 audit(1761713717.727:457): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd00a1a8f0 a2=3 a3=0 items=0 ppid=1 pid=4750 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:55:17.746587 kernel: audit: type=1327 audit(1761713717.727:457): proctitle=737368643A20636F7265205B707269765D Oct 29 04:55:17.727000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 29 04:55:17.753445 systemd-logind[1290]: New session 9 of user core. Oct 29 04:55:17.755517 systemd[1]: Started session-9.scope. Oct 29 04:55:17.766000 audit[4750]: USER_START pid=4750 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:17.777437 kernel: audit: type=1105 audit(1761713717.766:458): pid=4750 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:17.769000 audit[4753]: CRED_ACQ pid=4753 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:17.789417 kernel: audit: type=1103 audit(1761713717.769:459): pid=4753 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:18.486814 env[1306]: time="2025-10-29T04:55:18.486706329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 04:55:18.507756 sshd[4750]: pam_unix(sshd:session): session closed for user core Oct 29 04:55:18.509000 audit[4750]: USER_END pid=4750 uid=0 
auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:18.509000 audit[4750]: CRED_DISP pid=4750 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:18.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.24.246:22-147.75.109.163:35330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:55:18.512829 systemd[1]: sshd@10-10.230.24.246:22-147.75.109.163:35330.service: Deactivated successfully. Oct 29 04:55:18.514116 systemd[1]: session-9.scope: Deactivated successfully. Oct 29 04:55:18.514676 systemd-logind[1290]: Session 9 logged out. Waiting for processes to exit. Oct 29 04:55:18.516224 systemd-logind[1290]: Removed session 9. 
Oct 29 04:55:18.801388 env[1306]: time="2025-10-29T04:55:18.801171660Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 04:55:18.803017 env[1306]: time="2025-10-29T04:55:18.802926586Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 04:55:18.803588 kubelet[2172]: E1029 04:55:18.803505 2172 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 04:55:18.804182 kubelet[2172]: E1029 04:55:18.804139 2172 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 04:55:18.804552 kubelet[2172]: E1029 04:55:18.804493 2172 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1de7ade95d214bc29a765b1d29f494cd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sxv89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8647577b76-v9phq_calico-system(419b3e19-fef1-48f6-b46c-276ff1e0b621): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 04:55:18.807005 env[1306]: time="2025-10-29T04:55:18.806959879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 04:55:19.142259 
env[1306]: time="2025-10-29T04:55:19.142064350Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 04:55:19.143961 env[1306]: time="2025-10-29T04:55:19.143896665Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 04:55:19.144980 kubelet[2172]: E1029 04:55:19.144405 2172 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 04:55:19.144980 kubelet[2172]: E1029 04:55:19.144509 2172 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 04:55:19.145569 kubelet[2172]: E1029 04:55:19.145456 2172 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sxv89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8647577b76-v9phq_calico-system(419b3e19-fef1-48f6-b46c-276ff1e0b621): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 04:55:19.146756 kubelet[2172]: E1029 04:55:19.146694 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8647577b76-v9phq" podUID="419b3e19-fef1-48f6-b46c-276ff1e0b621" Oct 29 04:55:19.485398 env[1306]: time="2025-10-29T04:55:19.485328023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 04:55:19.803019 env[1306]: time="2025-10-29T04:55:19.802812156Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 04:55:19.804889 env[1306]: time="2025-10-29T04:55:19.804829361Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 04:55:19.805361 kubelet[2172]: E1029 04:55:19.805303 2172 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 04:55:19.808638 kubelet[2172]: E1029 04:55:19.805401 2172 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 04:55:19.808919 kubelet[2172]: E1029 04:55:19.808842 2172 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nm8fq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-678d6449b5-m8bcr_calico-apiserver(b4da2a97-feea-487c-8384-a94163380e6f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 04:55:19.810581 kubelet[2172]: E1029 04:55:19.810540 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-678d6449b5-m8bcr" podUID="b4da2a97-feea-487c-8384-a94163380e6f" Oct 29 04:55:23.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.24.246:22-147.75.109.163:34698 comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:55:23.652549 systemd[1]: Started sshd@11-10.230.24.246:22-147.75.109.163:34698.service. Oct 29 04:55:23.658191 kernel: kauditd_printk_skb: 3 callbacks suppressed Oct 29 04:55:23.658276 kernel: audit: type=1130 audit(1761713723.652:463): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.24.246:22-147.75.109.163:34698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:55:24.486761 env[1306]: time="2025-10-29T04:55:24.486667265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 29 04:55:24.565000 audit[4772]: USER_ACCT pid=4772 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:24.577996 kernel: audit: type=1101 audit(1761713724.565:464): pid=4772 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:24.578086 sshd[4772]: Accepted publickey for core from 147.75.109.163 port 34698 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 04:55:24.578470 sshd[4772]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 04:55:24.576000 audit[4772]: CRED_ACQ pid=4772 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:24.592195 kernel: audit: type=1103 audit(1761713724.576:465): pid=4772 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:24.592353 kernel: audit: type=1006 audit(1761713724.576:466): pid=4772 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Oct 29 04:55:24.576000 audit[4772]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe57c19f60 a2=3 a3=0 items=0 ppid=1 pid=4772 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:55:24.599559 kernel: audit: type=1300 audit(1761713724.576:466): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe57c19f60 a2=3 a3=0 items=0 ppid=1 pid=4772 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:55:24.576000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 29 04:55:24.603422 kernel: audit: type=1327 audit(1761713724.576:466): proctitle=737368643A20636F7265205B707269765D Oct 29 04:55:24.607140 systemd-logind[1290]: New session 10 of user core. Oct 29 04:55:24.609095 systemd[1]: Started session-10.scope. 
Oct 29 04:55:24.617000 audit[4772]: USER_START pid=4772 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:24.628223 kernel: audit: type=1105 audit(1761713724.617:467): pid=4772 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:24.626000 audit[4775]: CRED_ACQ pid=4775 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:24.635457 kernel: audit: type=1103 audit(1761713724.626:468): pid=4775 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:24.806562 env[1306]: time="2025-10-29T04:55:24.805442190Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 04:55:24.807913 env[1306]: time="2025-10-29T04:55:24.807850761Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 29 04:55:24.808499 kubelet[2172]: E1029 04:55:24.808336 2172 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 04:55:24.809290 kubelet[2172]: E1029 04:55:24.809237 2172 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 04:55:24.811323 kubelet[2172]: E1029 04:55:24.810853 2172 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kdjn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityCon
text{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tptz2_calico-system(de4b152a-29bb-4b0c-a12c-2eda92dd0564): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 29 04:55:24.821202 env[1306]: time="2025-10-29T04:55:24.821117045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 29 04:55:25.141212 env[1306]: time="2025-10-29T04:55:25.140931080Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 04:55:25.148698 env[1306]: time="2025-10-29T04:55:25.148572542Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 29 04:55:25.149350 kubelet[2172]: E1029 04:55:25.149241 2172 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 04:55:25.149523 kubelet[2172]: E1029 04:55:25.149411 2172 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 04:55:25.150225 kubelet[2172]: E1029 04:55:25.149694 2172 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kdjn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotP
resent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tptz2_calico-system(de4b152a-29bb-4b0c-a12c-2eda92dd0564): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 29 04:55:25.151671 kubelet[2172]: E1029 04:55:25.151619 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tptz2" podUID="de4b152a-29bb-4b0c-a12c-2eda92dd0564" Oct 29 04:55:25.310819 sshd[4772]: pam_unix(sshd:session): session closed for user core Oct 29 04:55:25.311000 audit[4772]: USER_END pid=4772 uid=0 
auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:25.314841 systemd[1]: sshd@11-10.230.24.246:22-147.75.109.163:34698.service: Deactivated successfully. Oct 29 04:55:25.316149 systemd[1]: session-10.scope: Deactivated successfully. Oct 29 04:55:25.318705 systemd-logind[1290]: Session 10 logged out. Waiting for processes to exit. Oct 29 04:55:25.320161 systemd-logind[1290]: Removed session 10. Oct 29 04:55:25.311000 audit[4772]: CRED_DISP pid=4772 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:25.327453 kernel: audit: type=1106 audit(1761713725.311:469): pid=4772 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:25.327559 kernel: audit: type=1104 audit(1761713725.311:470): pid=4772 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:25.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.24.246:22-147.75.109.163:34698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:55:25.457907 systemd[1]: Started sshd@12-10.230.24.246:22-147.75.109.163:34708.service. 
Oct 29 04:55:25.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.24.246:22-147.75.109.163:34708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:55:25.490425 env[1306]: time="2025-10-29T04:55:25.488782093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 04:55:25.802451 env[1306]: time="2025-10-29T04:55:25.802224853Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 04:55:25.803509 env[1306]: time="2025-10-29T04:55:25.803440639Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 04:55:25.803866 kubelet[2172]: E1029 04:55:25.803794 2172 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 04:55:25.804002 kubelet[2172]: E1029 04:55:25.803882 2172 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 04:55:25.804208 kubelet[2172]: E1029 04:55:25.804128 2172 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kcr87,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-678d6449b5-8q748_calico-apiserver(c95faceb-3919-455c-bd6c-4a68d6375a6d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 04:55:25.806029 kubelet[2172]: E1029 04:55:25.805957 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-678d6449b5-8q748" podUID="c95faceb-3919-455c-bd6c-4a68d6375a6d" Oct 29 04:55:26.371000 audit[4785]: USER_ACCT pid=4785 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit 
acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:26.373564 sshd[4785]: Accepted publickey for core from 147.75.109.163 port 34708 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 04:55:26.373000 audit[4785]: CRED_ACQ pid=4785 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:26.373000 audit[4785]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3f17b870 a2=3 a3=0 items=0 ppid=1 pid=4785 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:55:26.373000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 29 04:55:26.375675 sshd[4785]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 04:55:26.384238 systemd[1]: Started session-11.scope. Oct 29 04:55:26.384869 systemd-logind[1290]: New session 11 of user core. 
Oct 29 04:55:26.395000 audit[4785]: USER_START pid=4785 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:26.398000 audit[4788]: CRED_ACQ pid=4788 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:27.193105 sshd[4785]: pam_unix(sshd:session): session closed for user core Oct 29 04:55:27.193000 audit[4785]: USER_END pid=4785 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:27.193000 audit[4785]: CRED_DISP pid=4785 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:27.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.24.246:22-147.75.109.163:34708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:55:27.196847 systemd[1]: sshd@12-10.230.24.246:22-147.75.109.163:34708.service: Deactivated successfully. Oct 29 04:55:27.198065 systemd[1]: session-11.scope: Deactivated successfully. Oct 29 04:55:27.201926 systemd-logind[1290]: Session 11 logged out. Waiting for processes to exit. Oct 29 04:55:27.205189 systemd-logind[1290]: Removed session 11. 
Oct 29 04:55:27.342275 systemd[1]: Started sshd@13-10.230.24.246:22-147.75.109.163:34710.service. Oct 29 04:55:27.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.24.246:22-147.75.109.163:34710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:55:27.489740 kubelet[2172]: E1029 04:55:27.489104 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4pd8r" podUID="4d534185-a9e7-4c27-807c-917c6d4b755f" Oct 29 04:55:28.251000 audit[4796]: USER_ACCT pid=4796 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:28.253585 sshd[4796]: Accepted publickey for core from 147.75.109.163 port 34710 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 04:55:28.253000 audit[4796]: CRED_ACQ pid=4796 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:28.253000 audit[4796]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffec7053670 a2=3 a3=0 items=0 ppid=1 pid=4796 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:55:28.253000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 29 04:55:28.255850 sshd[4796]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 04:55:28.263363 systemd-logind[1290]: New session 12 of user core. Oct 29 04:55:28.263903 systemd[1]: Started session-12.scope. Oct 29 04:55:28.279000 audit[4796]: USER_START pid=4796 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:28.282000 audit[4799]: CRED_ACQ pid=4799 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:28.999445 sshd[4796]: pam_unix(sshd:session): session closed for user core Oct 29 04:55:29.008945 kernel: kauditd_printk_skb: 20 callbacks suppressed Oct 29 04:55:29.009165 kernel: audit: type=1106 audit(1761713728.999:487): pid=4796 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:28.999000 audit[4796]: USER_END pid=4796 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:29.014851 systemd[1]: sshd@13-10.230.24.246:22-147.75.109.163:34710.service: Deactivated successfully. 
Oct 29 04:55:29.018366 systemd[1]: session-12.scope: Deactivated successfully. Oct 29 04:55:29.018814 systemd-logind[1290]: Session 12 logged out. Waiting for processes to exit. Oct 29 04:55:28.999000 audit[4796]: CRED_DISP pid=4796 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:29.026418 kernel: audit: type=1104 audit(1761713728.999:488): pid=4796 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:29.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.24.246:22-147.75.109.163:34710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:55:29.034563 systemd-logind[1290]: Removed session 12. Oct 29 04:55:29.035406 kernel: audit: type=1131 audit(1761713729.014:489): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.24.246:22-147.75.109.163:34710 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 04:55:30.486350 env[1306]: time="2025-10-29T04:55:30.486043118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 29 04:55:30.797004 env[1306]: time="2025-10-29T04:55:30.796789534Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 04:55:30.798394 env[1306]: time="2025-10-29T04:55:30.798305952Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 29 04:55:30.798742 kubelet[2172]: E1029 04:55:30.798668 2172 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 04:55:30.799473 kubelet[2172]: E1029 04:55:30.799416 2172 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 29 04:55:30.800521 kubelet[2172]: E1029 04:55:30.800417 2172 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2qkdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-865b7496cf-28bh8_calico-system(9c2b379e-441a-4610-b0bd-30a6fa391f82): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 29 04:55:30.802720 kubelet[2172]: E1029 04:55:30.802591 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-865b7496cf-28bh8" podUID="9c2b379e-441a-4610-b0bd-30a6fa391f82" Oct 29 04:55:32.485220 kubelet[2172]: E1029 04:55:32.485164 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-678d6449b5-m8bcr" podUID="b4da2a97-feea-487c-8384-a94163380e6f" Oct 29 04:55:33.486160 kubelet[2172]: E1029 04:55:33.485863 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8647577b76-v9phq" podUID="419b3e19-fef1-48f6-b46c-276ff1e0b621" Oct 29 04:55:34.151940 systemd[1]: Started sshd@14-10.230.24.246:22-147.75.109.163:42920.service. Oct 29 04:55:34.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.24.246:22-147.75.109.163:42920 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 04:55:34.162407 kernel: audit: type=1130 audit(1761713734.151:490): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.24.246:22-147.75.109.163:42920 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:55:35.081000 audit[4818]: USER_ACCT pid=4818 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:35.090344 sshd[4818]: Accepted publickey for core from 147.75.109.163 port 42920 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 04:55:35.097960 kernel: audit: type=1101 audit(1761713735.081:491): pid=4818 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:35.098029 kernel: audit: type=1103 audit(1761713735.088:492): pid=4818 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:35.088000 audit[4818]: CRED_ACQ pid=4818 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:35.090957 sshd[4818]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 04:55:35.102881 kernel: audit: type=1006 audit(1761713735.088:493): pid=4818 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) 
old-ses=4294967295 ses=13 res=1 Oct 29 04:55:35.088000 audit[4818]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff1ed1cbc0 a2=3 a3=0 items=0 ppid=1 pid=4818 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:55:35.113413 kernel: audit: type=1300 audit(1761713735.088:493): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff1ed1cbc0 a2=3 a3=0 items=0 ppid=1 pid=4818 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:55:35.088000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 29 04:55:35.117410 kernel: audit: type=1327 audit(1761713735.088:493): proctitle=737368643A20636F7265205B707269765D Oct 29 04:55:35.121991 systemd-logind[1290]: New session 13 of user core. Oct 29 04:55:35.123979 systemd[1]: Started session-13.scope. 
Oct 29 04:55:35.132000 audit[4818]: USER_START pid=4818 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:35.141525 kernel: audit: type=1105 audit(1761713735.132:494): pid=4818 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:35.142647 kernel: audit: type=1103 audit(1761713735.140:495): pid=4821 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:35.140000 audit[4821]: CRED_ACQ pid=4821 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:35.847969 sshd[4818]: pam_unix(sshd:session): session closed for user core Oct 29 04:55:35.848000 audit[4818]: USER_END pid=4818 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:35.862421 kernel: audit: type=1106 audit(1761713735.848:496): pid=4818 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail 
acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:35.849000 audit[4818]: CRED_DISP pid=4818 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:35.863735 systemd[1]: sshd@14-10.230.24.246:22-147.75.109.163:42920.service: Deactivated successfully. Oct 29 04:55:35.870498 kernel: audit: type=1104 audit(1761713735.849:497): pid=4818 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:35.871200 systemd-logind[1290]: Session 13 logged out. Waiting for processes to exit. Oct 29 04:55:35.871205 systemd[1]: session-13.scope: Deactivated successfully. Oct 29 04:55:35.873348 systemd-logind[1290]: Removed session 13. Oct 29 04:55:35.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.24.246:22-147.75.109.163:42920 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 04:55:38.487646 kubelet[2172]: E1029 04:55:38.487537 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tptz2" podUID="de4b152a-29bb-4b0c-a12c-2eda92dd0564" Oct 29 04:55:39.490126 kubelet[2172]: E1029 04:55:39.489949 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4pd8r" podUID="4d534185-a9e7-4c27-807c-917c6d4b755f" Oct 29 04:55:40.485288 kubelet[2172]: E1029 04:55:40.485214 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code 
= NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-678d6449b5-8q748" podUID="c95faceb-3919-455c-bd6c-4a68d6375a6d" Oct 29 04:55:40.995277 systemd[1]: Started sshd@15-10.230.24.246:22-147.75.109.163:57416.service. Oct 29 04:55:40.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.24.246:22-147.75.109.163:57416 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:55:41.002952 kernel: kauditd_printk_skb: 1 callbacks suppressed Oct 29 04:55:41.003094 kernel: audit: type=1130 audit(1761713740.995:499): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.24.246:22-147.75.109.163:57416 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 04:55:41.907000 audit[4851]: USER_ACCT pid=4851 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:41.908989 sshd[4851]: Accepted publickey for core from 147.75.109.163 port 57416 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 04:55:41.915413 kernel: audit: type=1101 audit(1761713741.907:500): pid=4851 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:41.915000 audit[4851]: CRED_ACQ pid=4851 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:41.917723 sshd[4851]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 04:55:41.927921 kernel: audit: type=1103 audit(1761713741.915:501): pid=4851 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:41.928117 kernel: audit: type=1006 audit(1761713741.915:502): pid=4851 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Oct 29 04:55:41.915000 audit[4851]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc130cc00 a2=3 a3=0 items=0 ppid=1 pid=4851 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:55:41.938445 kernel: audit: type=1300 audit(1761713741.915:502): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc130cc00 a2=3 a3=0 items=0 ppid=1 pid=4851 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:55:41.915000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 29 04:55:41.941552 kernel: audit: type=1327 audit(1761713741.915:502): proctitle=737368643A20636F7265205B707269765D Oct 29 04:55:41.947673 systemd-logind[1290]: New session 14 of user core. Oct 29 04:55:41.947715 systemd[1]: Started session-14.scope. Oct 29 04:55:41.958000 audit[4851]: USER_START pid=4851 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:41.969426 kernel: audit: type=1105 audit(1761713741.958:503): pid=4851 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:41.968000 audit[4854]: CRED_ACQ pid=4854 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:41.976453 kernel: audit: type=1103 audit(1761713741.968:504): pid=4854 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh 
res=success' Oct 29 04:55:42.648152 sshd[4851]: pam_unix(sshd:session): session closed for user core Oct 29 04:55:42.648000 audit[4851]: USER_END pid=4851 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:42.667621 kernel: audit: type=1106 audit(1761713742.648:505): pid=4851 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:42.667709 kernel: audit: type=1104 audit(1761713742.648:506): pid=4851 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:42.648000 audit[4851]: CRED_DISP pid=4851 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:42.656111 systemd[1]: sshd@15-10.230.24.246:22-147.75.109.163:57416.service: Deactivated successfully. Oct 29 04:55:42.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.24.246:22-147.75.109.163:57416 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:55:42.657751 systemd[1]: session-14.scope: Deactivated successfully. Oct 29 04:55:42.666882 systemd-logind[1290]: Session 14 logged out. Waiting for processes to exit. 
Oct 29 04:55:42.668869 systemd-logind[1290]: Removed session 14. Oct 29 04:55:46.486799 kubelet[2172]: E1029 04:55:46.486676 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8647577b76-v9phq" podUID="419b3e19-fef1-48f6-b46c-276ff1e0b621" Oct 29 04:55:46.487902 kubelet[2172]: E1029 04:55:46.487830 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-865b7496cf-28bh8" podUID="9c2b379e-441a-4610-b0bd-30a6fa391f82" Oct 29 04:55:46.488296 kubelet[2172]: E1029 04:55:46.487965 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-678d6449b5-m8bcr" podUID="b4da2a97-feea-487c-8384-a94163380e6f" Oct 29 04:55:47.693243 env[1306]: time="2025-10-29T04:55:47.692422120Z" level=info msg="StopPodSandbox for \"52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6\"" Oct 29 04:55:47.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.24.246:22-147.75.109.163:57428 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:55:47.801851 kernel: kauditd_printk_skb: 1 callbacks suppressed Oct 29 04:55:47.801951 kernel: audit: type=1130 audit(1761713747.796:508): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.24.246:22-147.75.109.163:57428 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:55:47.797481 systemd[1]: Started sshd@16-10.230.24.246:22-147.75.109.163:57428.service. Oct 29 04:55:47.964968 env[1306]: 2025-10-29 04:55:47.817 [WARNING][4875] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xtjva.gb1.brightbox.com-k8s-calico--kube--controllers--865b7496cf--28bh8-eth0", GenerateName:"calico-kube-controllers-865b7496cf-", Namespace:"calico-system", SelfLink:"", UID:"9c2b379e-441a-4610-b0bd-30a6fa391f82", ResourceVersion:"1450", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 4, 54, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"865b7496cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xtjva.gb1.brightbox.com", ContainerID:"41a8798183adca961754f9de2135e56b50dc3a47ba5ff4c8e1b8731626c66c90", Pod:"calico-kube-controllers-865b7496cf-28bh8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.31.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0127d6c02ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 04:55:47.964968 env[1306]: 2025-10-29 04:55:47.820 [INFO][4875] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" Oct 29 04:55:47.964968 env[1306]: 2025-10-29 04:55:47.820 [INFO][4875] cni-plugin/dataplane_linux.go 
555: CleanUpNamespace called with no netns name, ignoring. ContainerID="52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" iface="eth0" netns="" Oct 29 04:55:47.964968 env[1306]: 2025-10-29 04:55:47.820 [INFO][4875] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" Oct 29 04:55:47.964968 env[1306]: 2025-10-29 04:55:47.820 [INFO][4875] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" Oct 29 04:55:47.964968 env[1306]: 2025-10-29 04:55:47.942 [INFO][4883] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" HandleID="k8s-pod-network.52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--kube--controllers--865b7496cf--28bh8-eth0" Oct 29 04:55:47.964968 env[1306]: 2025-10-29 04:55:47.943 [INFO][4883] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 04:55:47.964968 env[1306]: 2025-10-29 04:55:47.943 [INFO][4883] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 04:55:47.964968 env[1306]: 2025-10-29 04:55:47.955 [WARNING][4883] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" HandleID="k8s-pod-network.52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--kube--controllers--865b7496cf--28bh8-eth0" Oct 29 04:55:47.964968 env[1306]: 2025-10-29 04:55:47.956 [INFO][4883] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" HandleID="k8s-pod-network.52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--kube--controllers--865b7496cf--28bh8-eth0" Oct 29 04:55:47.964968 env[1306]: 2025-10-29 04:55:47.958 [INFO][4883] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 04:55:47.964968 env[1306]: 2025-10-29 04:55:47.961 [INFO][4875] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" Oct 29 04:55:47.966497 env[1306]: time="2025-10-29T04:55:47.965562605Z" level=info msg="TearDown network for sandbox \"52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6\" successfully" Oct 29 04:55:47.966497 env[1306]: time="2025-10-29T04:55:47.965626258Z" level=info msg="StopPodSandbox for \"52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6\" returns successfully" Oct 29 04:55:47.974533 env[1306]: time="2025-10-29T04:55:47.974491697Z" level=info msg="RemovePodSandbox for \"52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6\"" Oct 29 04:55:47.974990 env[1306]: time="2025-10-29T04:55:47.974888786Z" level=info msg="Forcibly stopping sandbox \"52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6\"" Oct 29 04:55:48.089150 env[1306]: 2025-10-29 04:55:48.037 [WARNING][4900] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xtjva.gb1.brightbox.com-k8s-calico--kube--controllers--865b7496cf--28bh8-eth0", GenerateName:"calico-kube-controllers-865b7496cf-", Namespace:"calico-system", SelfLink:"", UID:"9c2b379e-441a-4610-b0bd-30a6fa391f82", ResourceVersion:"1450", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 4, 54, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"865b7496cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xtjva.gb1.brightbox.com", ContainerID:"41a8798183adca961754f9de2135e56b50dc3a47ba5ff4c8e1b8731626c66c90", Pod:"calico-kube-controllers-865b7496cf-28bh8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.31.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0127d6c02ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 04:55:48.089150 env[1306]: 2025-10-29 04:55:48.037 [INFO][4900] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" Oct 29 04:55:48.089150 env[1306]: 2025-10-29 04:55:48.037 [INFO][4900] cni-plugin/dataplane_linux.go 
555: CleanUpNamespace called with no netns name, ignoring. ContainerID="52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" iface="eth0" netns="" Oct 29 04:55:48.089150 env[1306]: 2025-10-29 04:55:48.037 [INFO][4900] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" Oct 29 04:55:48.089150 env[1306]: 2025-10-29 04:55:48.038 [INFO][4900] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" Oct 29 04:55:48.089150 env[1306]: 2025-10-29 04:55:48.070 [INFO][4907] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" HandleID="k8s-pod-network.52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--kube--controllers--865b7496cf--28bh8-eth0" Oct 29 04:55:48.089150 env[1306]: 2025-10-29 04:55:48.070 [INFO][4907] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 04:55:48.089150 env[1306]: 2025-10-29 04:55:48.070 [INFO][4907] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 04:55:48.089150 env[1306]: 2025-10-29 04:55:48.081 [WARNING][4907] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" HandleID="k8s-pod-network.52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--kube--controllers--865b7496cf--28bh8-eth0" Oct 29 04:55:48.089150 env[1306]: 2025-10-29 04:55:48.081 [INFO][4907] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" HandleID="k8s-pod-network.52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--kube--controllers--865b7496cf--28bh8-eth0" Oct 29 04:55:48.089150 env[1306]: 2025-10-29 04:55:48.083 [INFO][4907] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 04:55:48.089150 env[1306]: 2025-10-29 04:55:48.086 [INFO][4900] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6" Oct 29 04:55:48.091917 env[1306]: time="2025-10-29T04:55:48.090530015Z" level=info msg="TearDown network for sandbox \"52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6\" successfully" Oct 29 04:55:48.096842 env[1306]: time="2025-10-29T04:55:48.096803732Z" level=info msg="RemovePodSandbox \"52470541d5da60bf09103e620e778b1c931b8872bb5b0353caed82212d4794f6\" returns successfully" Oct 29 04:55:48.097627 env[1306]: time="2025-10-29T04:55:48.097571379Z" level=info msg="StopPodSandbox for \"c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4\"" Oct 29 04:55:48.229752 env[1306]: 2025-10-29 04:55:48.167 [WARNING][4922] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--8q748-eth0", GenerateName:"calico-apiserver-678d6449b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"c95faceb-3919-455c-bd6c-4a68d6375a6d", ResourceVersion:"1424", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 4, 54, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"678d6449b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xtjva.gb1.brightbox.com", ContainerID:"1f209eafcd39496bc08c52f801b25976b94c33b8e90ea00a8992b3d934c0dcd7", Pod:"calico-apiserver-678d6449b5-8q748", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1f2f8ab088e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 04:55:48.229752 env[1306]: 2025-10-29 04:55:48.167 [INFO][4922] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" Oct 29 04:55:48.229752 env[1306]: 2025-10-29 04:55:48.167 [INFO][4922] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" iface="eth0" netns="" Oct 29 04:55:48.229752 env[1306]: 2025-10-29 04:55:48.167 [INFO][4922] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" Oct 29 04:55:48.229752 env[1306]: 2025-10-29 04:55:48.167 [INFO][4922] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" Oct 29 04:55:48.229752 env[1306]: 2025-10-29 04:55:48.211 [INFO][4929] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" HandleID="k8s-pod-network.c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--8q748-eth0" Oct 29 04:55:48.229752 env[1306]: 2025-10-29 04:55:48.212 [INFO][4929] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 04:55:48.229752 env[1306]: 2025-10-29 04:55:48.212 [INFO][4929] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 04:55:48.229752 env[1306]: 2025-10-29 04:55:48.222 [WARNING][4929] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" HandleID="k8s-pod-network.c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--8q748-eth0" Oct 29 04:55:48.229752 env[1306]: 2025-10-29 04:55:48.222 [INFO][4929] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" HandleID="k8s-pod-network.c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--8q748-eth0" Oct 29 04:55:48.229752 env[1306]: 2025-10-29 04:55:48.224 [INFO][4929] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 04:55:48.229752 env[1306]: 2025-10-29 04:55:48.226 [INFO][4922] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" Oct 29 04:55:48.232283 env[1306]: time="2025-10-29T04:55:48.229725489Z" level=info msg="TearDown network for sandbox \"c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4\" successfully" Oct 29 04:55:48.232283 env[1306]: time="2025-10-29T04:55:48.231067951Z" level=info msg="StopPodSandbox for \"c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4\" returns successfully" Oct 29 04:55:48.232617 env[1306]: time="2025-10-29T04:55:48.232570683Z" level=info msg="RemovePodSandbox for \"c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4\"" Oct 29 04:55:48.232787 env[1306]: time="2025-10-29T04:55:48.232728448Z" level=info msg="Forcibly stopping sandbox \"c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4\"" Oct 29 04:55:48.387423 env[1306]: 2025-10-29 04:55:48.328 [WARNING][4943] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--8q748-eth0", GenerateName:"calico-apiserver-678d6449b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"c95faceb-3919-455c-bd6c-4a68d6375a6d", ResourceVersion:"1424", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 4, 54, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"678d6449b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xtjva.gb1.brightbox.com", ContainerID:"1f209eafcd39496bc08c52f801b25976b94c33b8e90ea00a8992b3d934c0dcd7", Pod:"calico-apiserver-678d6449b5-8q748", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.31.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1f2f8ab088e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 04:55:48.387423 env[1306]: 2025-10-29 04:55:48.328 [INFO][4943] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" Oct 29 04:55:48.387423 env[1306]: 2025-10-29 04:55:48.328 [INFO][4943] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" iface="eth0" netns="" Oct 29 04:55:48.387423 env[1306]: 2025-10-29 04:55:48.328 [INFO][4943] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" Oct 29 04:55:48.387423 env[1306]: 2025-10-29 04:55:48.328 [INFO][4943] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" Oct 29 04:55:48.387423 env[1306]: 2025-10-29 04:55:48.369 [INFO][4951] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" HandleID="k8s-pod-network.c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--8q748-eth0" Oct 29 04:55:48.387423 env[1306]: 2025-10-29 04:55:48.370 [INFO][4951] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 04:55:48.387423 env[1306]: 2025-10-29 04:55:48.370 [INFO][4951] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 04:55:48.387423 env[1306]: 2025-10-29 04:55:48.380 [WARNING][4951] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" HandleID="k8s-pod-network.c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--8q748-eth0" Oct 29 04:55:48.387423 env[1306]: 2025-10-29 04:55:48.380 [INFO][4951] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" HandleID="k8s-pod-network.c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" Workload="srv--xtjva.gb1.brightbox.com-k8s-calico--apiserver--678d6449b5--8q748-eth0" Oct 29 04:55:48.387423 env[1306]: 2025-10-29 04:55:48.382 [INFO][4951] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 04:55:48.387423 env[1306]: 2025-10-29 04:55:48.385 [INFO][4943] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4" Oct 29 04:55:48.388419 env[1306]: time="2025-10-29T04:55:48.387449031Z" level=info msg="TearDown network for sandbox \"c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4\" successfully" Oct 29 04:55:48.391598 env[1306]: time="2025-10-29T04:55:48.391540874Z" level=info msg="RemovePodSandbox \"c7b1e7ad5bafd6a57d18590168277df25c81146cfedddeccf4b7b6bc06a53ff4\" returns successfully" Oct 29 04:55:48.392442 env[1306]: time="2025-10-29T04:55:48.392404680Z" level=info msg="StopPodSandbox for \"eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1\"" Oct 29 04:55:48.519197 env[1306]: 2025-10-29 04:55:48.457 [WARNING][4965] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lslb8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b87dfa06-fb00-43d6-9e83-3b9e31aa23c5", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 4, 53, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xtjva.gb1.brightbox.com", ContainerID:"026d5ab0ffa264be7c7615ffc2acfdab2dd0fd41e1615f78214c58efa46a8596", Pod:"coredns-668d6bf9bc-lslb8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib1c6a1e78fb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 04:55:48.519197 env[1306]: 2025-10-29 
04:55:48.457 [INFO][4965] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" Oct 29 04:55:48.519197 env[1306]: 2025-10-29 04:55:48.457 [INFO][4965] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" iface="eth0" netns="" Oct 29 04:55:48.519197 env[1306]: 2025-10-29 04:55:48.458 [INFO][4965] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" Oct 29 04:55:48.519197 env[1306]: 2025-10-29 04:55:48.458 [INFO][4965] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" Oct 29 04:55:48.519197 env[1306]: 2025-10-29 04:55:48.492 [INFO][4972] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" HandleID="k8s-pod-network.eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" Workload="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lslb8-eth0" Oct 29 04:55:48.519197 env[1306]: 2025-10-29 04:55:48.493 [INFO][4972] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 04:55:48.519197 env[1306]: 2025-10-29 04:55:48.493 [INFO][4972] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 04:55:48.519197 env[1306]: 2025-10-29 04:55:48.505 [WARNING][4972] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" HandleID="k8s-pod-network.eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" Workload="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lslb8-eth0" Oct 29 04:55:48.519197 env[1306]: 2025-10-29 04:55:48.505 [INFO][4972] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" HandleID="k8s-pod-network.eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" Workload="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lslb8-eth0" Oct 29 04:55:48.519197 env[1306]: 2025-10-29 04:55:48.515 [INFO][4972] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 04:55:48.519197 env[1306]: 2025-10-29 04:55:48.517 [INFO][4965] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" Oct 29 04:55:48.521522 env[1306]: time="2025-10-29T04:55:48.520746930Z" level=info msg="TearDown network for sandbox \"eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1\" successfully" Oct 29 04:55:48.521522 env[1306]: time="2025-10-29T04:55:48.520811203Z" level=info msg="StopPodSandbox for \"eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1\" returns successfully" Oct 29 04:55:48.522389 env[1306]: time="2025-10-29T04:55:48.522339095Z" level=info msg="RemovePodSandbox for \"eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1\"" Oct 29 04:55:48.522711 env[1306]: time="2025-10-29T04:55:48.522644368Z" level=info msg="Forcibly stopping sandbox \"eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1\"" Oct 29 04:55:48.629466 env[1306]: 2025-10-29 04:55:48.579 [WARNING][4986] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lslb8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b87dfa06-fb00-43d6-9e83-3b9e31aa23c5", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 4, 53, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xtjva.gb1.brightbox.com", ContainerID:"026d5ab0ffa264be7c7615ffc2acfdab2dd0fd41e1615f78214c58efa46a8596", Pod:"coredns-668d6bf9bc-lslb8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib1c6a1e78fb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 04:55:48.629466 env[1306]: 2025-10-29 
04:55:48.579 [INFO][4986] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" Oct 29 04:55:48.629466 env[1306]: 2025-10-29 04:55:48.580 [INFO][4986] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" iface="eth0" netns="" Oct 29 04:55:48.629466 env[1306]: 2025-10-29 04:55:48.580 [INFO][4986] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" Oct 29 04:55:48.629466 env[1306]: 2025-10-29 04:55:48.580 [INFO][4986] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" Oct 29 04:55:48.629466 env[1306]: 2025-10-29 04:55:48.613 [INFO][4993] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" HandleID="k8s-pod-network.eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" Workload="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lslb8-eth0" Oct 29 04:55:48.629466 env[1306]: 2025-10-29 04:55:48.613 [INFO][4993] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 04:55:48.629466 env[1306]: 2025-10-29 04:55:48.613 [INFO][4993] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 04:55:48.629466 env[1306]: 2025-10-29 04:55:48.622 [WARNING][4993] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" HandleID="k8s-pod-network.eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" Workload="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lslb8-eth0" Oct 29 04:55:48.629466 env[1306]: 2025-10-29 04:55:48.622 [INFO][4993] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" HandleID="k8s-pod-network.eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" Workload="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--lslb8-eth0" Oct 29 04:55:48.629466 env[1306]: 2025-10-29 04:55:48.625 [INFO][4993] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 04:55:48.629466 env[1306]: 2025-10-29 04:55:48.627 [INFO][4986] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1" Oct 29 04:55:48.630544 env[1306]: time="2025-10-29T04:55:48.629494070Z" level=info msg="TearDown network for sandbox \"eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1\" successfully" Oct 29 04:55:48.633835 env[1306]: time="2025-10-29T04:55:48.633796418Z" level=info msg="RemovePodSandbox \"eab49bf327be5e620219503b2d2b379877821ada77cbd3855dd41b67a3ddc3a1\" returns successfully" Oct 29 04:55:48.634746 env[1306]: time="2025-10-29T04:55:48.634686731Z" level=info msg="StopPodSandbox for \"7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1\"" Oct 29 04:55:48.730000 audit[4881]: USER_ACCT pid=4881 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:48.740903 sshd[4881]: Accepted publickey for core from 147.75.109.163 port 57428 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw 
Oct 29 04:55:48.743650 kernel: audit: type=1101 audit(1761713748.730:509): pid=4881 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:48.742000 audit[4881]: CRED_ACQ pid=4881 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:48.744734 sshd[4881]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 04:55:48.753881 kernel: audit: type=1103 audit(1761713748.742:510): pid=4881 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:48.754035 kernel: audit: type=1006 audit(1761713748.742:511): pid=4881 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Oct 29 04:55:48.742000 audit[4881]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff0eb1fc70 a2=3 a3=0 items=0 ppid=1 pid=4881 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:55:48.779425 kernel: audit: type=1300 audit(1761713748.742:511): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff0eb1fc70 a2=3 a3=0 items=0 ppid=1 pid=4881 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:55:48.790051 kernel: audit: type=1327 audit(1761713748.742:511): 
proctitle=737368643A20636F7265205B707269765D Oct 29 04:55:48.742000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 29 04:55:48.799877 systemd[1]: Started session-15.scope. Oct 29 04:55:48.800220 systemd-logind[1290]: New session 15 of user core. Oct 29 04:55:48.803099 env[1306]: 2025-10-29 04:55:48.685 [WARNING][5008] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--s5c5l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"14f0fbf4-8cf4-45f3-bb19-eb68dd03b78e", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 4, 53, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xtjva.gb1.brightbox.com", ContainerID:"63f762bb86792b3b64fe17ebed94bdd32c48cf715405c78e118344c2abe8365e", Pod:"coredns-668d6bf9bc-s5c5l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie75e69b7c7f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, 
Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 04:55:48.803099 env[1306]: 2025-10-29 04:55:48.686 [INFO][5008] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" Oct 29 04:55:48.803099 env[1306]: 2025-10-29 04:55:48.686 [INFO][5008] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" iface="eth0" netns="" Oct 29 04:55:48.803099 env[1306]: 2025-10-29 04:55:48.686 [INFO][5008] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" Oct 29 04:55:48.803099 env[1306]: 2025-10-29 04:55:48.686 [INFO][5008] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" Oct 29 04:55:48.803099 env[1306]: 2025-10-29 04:55:48.719 [INFO][5015] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" HandleID="k8s-pod-network.7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" Workload="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--s5c5l-eth0" Oct 29 04:55:48.803099 env[1306]: 2025-10-29 04:55:48.719 [INFO][5015] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 04:55:48.803099 env[1306]: 2025-10-29 04:55:48.719 [INFO][5015] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 29 04:55:48.803099 env[1306]: 2025-10-29 04:55:48.741 [WARNING][5015] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" HandleID="k8s-pod-network.7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" Workload="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--s5c5l-eth0" Oct 29 04:55:48.803099 env[1306]: 2025-10-29 04:55:48.742 [INFO][5015] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" HandleID="k8s-pod-network.7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" Workload="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--s5c5l-eth0" Oct 29 04:55:48.803099 env[1306]: 2025-10-29 04:55:48.782 [INFO][5015] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 04:55:48.803099 env[1306]: 2025-10-29 04:55:48.794 [INFO][5008] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" Oct 29 04:55:48.805518 env[1306]: time="2025-10-29T04:55:48.803370013Z" level=info msg="TearDown network for sandbox \"7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1\" successfully" Oct 29 04:55:48.805518 env[1306]: time="2025-10-29T04:55:48.803451979Z" level=info msg="StopPodSandbox for \"7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1\" returns successfully" Oct 29 04:55:48.805518 env[1306]: time="2025-10-29T04:55:48.805132934Z" level=info msg="RemovePodSandbox for \"7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1\"" Oct 29 04:55:48.805518 env[1306]: time="2025-10-29T04:55:48.805182627Z" level=info msg="Forcibly stopping sandbox \"7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1\"" Oct 29 04:55:48.816000 audit[4881]: USER_START pid=4881 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:48.823000 audit[5027]: CRED_ACQ pid=5027 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:48.831567 kernel: audit: type=1105 audit(1761713748.816:512): pid=4881 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:48.831669 kernel: audit: type=1103 audit(1761713748.823:513): pid=5027 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:48.947889 env[1306]: 2025-10-29 04:55:48.881 [WARNING][5033] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--s5c5l-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"14f0fbf4-8cf4-45f3-bb19-eb68dd03b78e", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.October, 29, 4, 53, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xtjva.gb1.brightbox.com", ContainerID:"63f762bb86792b3b64fe17ebed94bdd32c48cf715405c78e118344c2abe8365e", Pod:"coredns-668d6bf9bc-s5c5l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.31.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie75e69b7c7f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 29 04:55:48.947889 env[1306]: 2025-10-29 04:55:48.882 [INFO][5033] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" Oct 29 04:55:48.947889 env[1306]: 2025-10-29 04:55:48.882 [INFO][5033] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" iface="eth0" netns="" Oct 29 04:55:48.947889 env[1306]: 2025-10-29 04:55:48.882 [INFO][5033] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" Oct 29 04:55:48.947889 env[1306]: 2025-10-29 04:55:48.882 [INFO][5033] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" Oct 29 04:55:48.947889 env[1306]: 2025-10-29 04:55:48.929 [INFO][5040] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" HandleID="k8s-pod-network.7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" Workload="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--s5c5l-eth0" Oct 29 04:55:48.947889 env[1306]: 2025-10-29 04:55:48.929 [INFO][5040] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 29 04:55:48.947889 env[1306]: 2025-10-29 04:55:48.929 [INFO][5040] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 29 04:55:48.947889 env[1306]: 2025-10-29 04:55:48.939 [WARNING][5040] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" HandleID="k8s-pod-network.7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" Workload="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--s5c5l-eth0" Oct 29 04:55:48.947889 env[1306]: 2025-10-29 04:55:48.939 [INFO][5040] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" HandleID="k8s-pod-network.7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" Workload="srv--xtjva.gb1.brightbox.com-k8s-coredns--668d6bf9bc--s5c5l-eth0" Oct 29 04:55:48.947889 env[1306]: 2025-10-29 04:55:48.943 [INFO][5040] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 29 04:55:48.947889 env[1306]: 2025-10-29 04:55:48.945 [INFO][5033] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1" Oct 29 04:55:48.949270 env[1306]: time="2025-10-29T04:55:48.948191557Z" level=info msg="TearDown network for sandbox \"7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1\" successfully" Oct 29 04:55:48.953538 env[1306]: time="2025-10-29T04:55:48.953471526Z" level=info msg="RemovePodSandbox \"7fda6f8b0d6fecbdd351eca44da1370ba34d653f774a4690c7a5fcf54d5561b1\" returns successfully" Oct 29 04:55:49.498539 sshd[4881]: pam_unix(sshd:session): session closed for user core Oct 29 04:55:49.502216 kubelet[2172]: E1029 04:55:49.502137 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tptz2" podUID="de4b152a-29bb-4b0c-a12c-2eda92dd0564" Oct 29 04:55:49.502000 audit[4881]: USER_END pid=4881 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:49.512510 kernel: audit: type=1106 audit(1761713749.502:514): pid=4881 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:49.507507 systemd[1]: sshd@16-10.230.24.246:22-147.75.109.163:57428.service: Deactivated successfully. Oct 29 04:55:49.508754 systemd[1]: session-15.scope: Deactivated successfully. Oct 29 04:55:49.503000 audit[4881]: CRED_DISP pid=4881 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:49.513830 systemd-logind[1290]: Session 15 logged out. Waiting for processes to exit. 
Oct 29 04:55:49.520904 kernel: audit: type=1104 audit(1761713749.503:515): pid=4881 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:49.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.24.246:22-147.75.109.163:57428 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:55:49.522745 systemd-logind[1290]: Removed session 15. Oct 29 04:55:49.643884 systemd[1]: Started sshd@17-10.230.24.246:22-147.75.109.163:57438.service. Oct 29 04:55:49.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.24.246:22-147.75.109.163:57438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:55:50.551000 audit[5057]: USER_ACCT pid=5057 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:50.553152 sshd[5057]: Accepted publickey for core from 147.75.109.163 port 57438 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 04:55:50.553000 audit[5057]: CRED_ACQ pid=5057 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:50.553000 audit[5057]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcb1d970a0 a2=3 a3=0 items=0 ppid=1 pid=5057 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" 
exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:55:50.553000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 29 04:55:50.556065 sshd[5057]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 04:55:50.564722 systemd[1]: Started session-16.scope. Oct 29 04:55:50.565030 systemd-logind[1290]: New session 16 of user core. Oct 29 04:55:50.577000 audit[5057]: USER_START pid=5057 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:50.579000 audit[5060]: CRED_ACQ pid=5060 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:51.820708 sshd[5057]: pam_unix(sshd:session): session closed for user core Oct 29 04:55:51.823000 audit[5057]: USER_END pid=5057 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:51.824000 audit[5057]: CRED_DISP pid=5057 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:51.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.24.246:22-147.75.109.163:57438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 04:55:51.829072 systemd[1]: sshd@17-10.230.24.246:22-147.75.109.163:57438.service: Deactivated successfully. Oct 29 04:55:51.830666 systemd[1]: session-16.scope: Deactivated successfully. Oct 29 04:55:51.832359 systemd-logind[1290]: Session 16 logged out. Waiting for processes to exit. Oct 29 04:55:51.833809 systemd-logind[1290]: Removed session 16. Oct 29 04:55:51.965822 systemd[1]: Started sshd@18-10.230.24.246:22-147.75.109.163:46828.service. Oct 29 04:55:51.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.24.246:22-147.75.109.163:46828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:55:52.885000 audit[5068]: USER_ACCT pid=5068 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:52.890808 sshd[5068]: Accepted publickey for core from 147.75.109.163 port 46828 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 04:55:52.891775 kernel: kauditd_printk_skb: 13 callbacks suppressed Oct 29 04:55:52.891920 kernel: audit: type=1101 audit(1761713752.885:527): pid=5068 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:52.897000 audit[5068]: CRED_ACQ pid=5068 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:52.899325 sshd[5068]: pam_unix(sshd:session): session opened for user core(uid=500) by 
(uid=0) Oct 29 04:55:52.905421 kernel: audit: type=1103 audit(1761713752.897:528): pid=5068 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:52.911518 kernel: audit: type=1006 audit(1761713752.897:529): pid=5068 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Oct 29 04:55:52.911875 kernel: audit: type=1300 audit(1761713752.897:529): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe1ee00060 a2=3 a3=0 items=0 ppid=1 pid=5068 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:55:52.897000 audit[5068]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe1ee00060 a2=3 a3=0 items=0 ppid=1 pid=5068 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:55:52.897000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 29 04:55:52.920408 kernel: audit: type=1327 audit(1761713752.897:529): proctitle=737368643A20636F7265205B707269765D Oct 29 04:55:52.924455 systemd-logind[1290]: New session 17 of user core. Oct 29 04:55:52.924981 systemd[1]: Started session-17.scope. 
Oct 29 04:55:52.933000 audit[5068]: USER_START pid=5068 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:52.941000 audit[5071]: CRED_ACQ pid=5071 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:52.949273 kernel: audit: type=1105 audit(1761713752.933:530): pid=5068 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:52.949446 kernel: audit: type=1103 audit(1761713752.941:531): pid=5071 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:53.491572 kubelet[2172]: E1029 04:55:53.490450 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4pd8r" podUID="4d534185-a9e7-4c27-807c-917c6d4b755f" Oct 29 04:55:54.317000 audit[5081]: NETFILTER_CFG table=filter:127 family=2 entries=26 
op=nft_register_rule pid=5081 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:55:54.317000 audit[5081]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff3304d2a0 a2=0 a3=7fff3304d28c items=0 ppid=2278 pid=5081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:55:54.336132 kernel: audit: type=1325 audit(1761713754.317:532): table=filter:127 family=2 entries=26 op=nft_register_rule pid=5081 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:55:54.336325 kernel: audit: type=1300 audit(1761713754.317:532): arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fff3304d2a0 a2=0 a3=7fff3304d28c items=0 ppid=2278 pid=5081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:55:54.317000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:55:54.341405 kernel: audit: type=1327 audit(1761713754.317:532): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:55:54.341000 audit[5081]: NETFILTER_CFG table=nat:128 family=2 entries=20 op=nft_register_rule pid=5081 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:55:54.341000 audit[5081]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff3304d2a0 a2=0 a3=0 items=0 ppid=2278 pid=5081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:55:54.341000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:55:54.365000 audit[5083]: NETFILTER_CFG table=filter:129 family=2 entries=38 op=nft_register_rule pid=5083 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:55:54.365000 audit[5083]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffdfc5dd440 a2=0 a3=7ffdfc5dd42c items=0 ppid=2278 pid=5083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:55:54.365000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:55:54.368000 audit[5083]: NETFILTER_CFG table=nat:130 family=2 entries=20 op=nft_register_rule pid=5083 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:55:54.368000 audit[5083]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdfc5dd440 a2=0 a3=0 items=0 ppid=2278 pid=5083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:55:54.368000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:55:54.410048 sshd[5068]: pam_unix(sshd:session): session closed for user core Oct 29 04:55:54.412000 audit[5068]: USER_END pid=5068 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:54.412000 audit[5068]: CRED_DISP pid=5068 uid=0 auid=500 ses=17 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:54.416231 systemd[1]: sshd@18-10.230.24.246:22-147.75.109.163:46828.service: Deactivated successfully. Oct 29 04:55:54.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.24.246:22-147.75.109.163:46828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:55:54.418446 systemd[1]: session-17.scope: Deactivated successfully. Oct 29 04:55:54.418490 systemd-logind[1290]: Session 17 logged out. Waiting for processes to exit. Oct 29 04:55:54.420116 systemd-logind[1290]: Removed session 17. Oct 29 04:55:54.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.24.246:22-147.75.109.163:46830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:55:54.556047 systemd[1]: Started sshd@19-10.230.24.246:22-147.75.109.163:46830.service. 
Oct 29 04:55:55.469000 audit[5086]: USER_ACCT pid=5086 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:55.471508 sshd[5086]: Accepted publickey for core from 147.75.109.163 port 46830 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 04:55:55.471000 audit[5086]: CRED_ACQ pid=5086 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:55.471000 audit[5086]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcdbd7ec90 a2=3 a3=0 items=0 ppid=1 pid=5086 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:55:55.471000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 29 04:55:55.474148 sshd[5086]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 04:55:55.482308 systemd-logind[1290]: New session 18 of user core. Oct 29 04:55:55.483353 systemd[1]: Started session-18.scope. 
Oct 29 04:55:55.490597 kubelet[2172]: E1029 04:55:55.490041 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-678d6449b5-8q748" podUID="c95faceb-3919-455c-bd6c-4a68d6375a6d" Oct 29 04:55:55.500000 audit[5086]: USER_START pid=5086 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:55.507000 audit[5089]: CRED_ACQ pid=5089 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:56.474514 sshd[5086]: pam_unix(sshd:session): session closed for user core Oct 29 04:55:56.475000 audit[5086]: USER_END pid=5086 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:56.475000 audit[5086]: CRED_DISP pid=5086 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:56.479081 systemd[1]: 
sshd@19-10.230.24.246:22-147.75.109.163:46830.service: Deactivated successfully. Oct 29 04:55:56.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.24.246:22-147.75.109.163:46830 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:55:56.480984 systemd-logind[1290]: Session 18 logged out. Waiting for processes to exit. Oct 29 04:55:56.481001 systemd[1]: session-18.scope: Deactivated successfully. Oct 29 04:55:56.482951 systemd-logind[1290]: Removed session 18. Oct 29 04:55:56.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.24.246:22-147.75.109.163:46840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:55:56.624278 systemd[1]: Started sshd@20-10.230.24.246:22-147.75.109.163:46840.service. Oct 29 04:55:57.542000 audit[5097]: USER_ACCT pid=5097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:57.544427 sshd[5097]: Accepted publickey for core from 147.75.109.163 port 46840 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 04:55:57.544000 audit[5097]: CRED_ACQ pid=5097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:57.544000 audit[5097]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd38be58c0 a2=3 a3=0 items=0 ppid=1 pid=5097 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:55:57.544000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 29 04:55:57.546415 sshd[5097]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 04:55:57.553603 systemd-logind[1290]: New session 19 of user core. Oct 29 04:55:57.554837 systemd[1]: Started session-19.scope. Oct 29 04:55:57.562000 audit[5097]: USER_START pid=5097 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:57.564000 audit[5100]: CRED_ACQ pid=5100 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:58.278871 sshd[5097]: pam_unix(sshd:session): session closed for user core Oct 29 04:55:58.294362 kernel: kauditd_printk_skb: 31 callbacks suppressed Oct 29 04:55:58.294551 kernel: audit: type=1106 audit(1761713758.279:554): pid=5097 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:58.279000 audit[5097]: USER_END pid=5097 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:58.287261 systemd-logind[1290]: Session 19 logged out. Waiting for processes to exit. 
Oct 29 04:55:58.289599 systemd[1]: sshd@20-10.230.24.246:22-147.75.109.163:46840.service: Deactivated successfully. Oct 29 04:55:58.290966 systemd[1]: session-19.scope: Deactivated successfully. Oct 29 04:55:58.294073 systemd-logind[1290]: Removed session 19. Oct 29 04:55:58.279000 audit[5097]: CRED_DISP pid=5097 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:58.308899 kernel: audit: type=1104 audit(1761713758.279:555): pid=5097 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:55:58.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.24.246:22-147.75.109.163:46840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:55:58.315662 kernel: audit: type=1131 audit(1761713758.288:556): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.24.246:22-147.75.109.163:46840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 04:55:58.487791 kubelet[2172]: E1029 04:55:58.487706 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-678d6449b5-m8bcr" podUID="b4da2a97-feea-487c-8384-a94163380e6f" Oct 29 04:55:59.486078 kubelet[2172]: E1029 04:55:59.485998 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-865b7496cf-28bh8" podUID="9c2b379e-441a-4610-b0bd-30a6fa391f82" Oct 29 04:56:01.007000 audit[5117]: NETFILTER_CFG table=filter:131 family=2 entries=26 op=nft_register_rule pid=5117 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:56:01.016513 kernel: audit: type=1325 audit(1761713761.007:557): table=filter:131 family=2 entries=26 op=nft_register_rule pid=5117 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:56:01.016627 kernel: audit: type=1300 audit(1761713761.007:557): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcb87822f0 a2=0 a3=7ffcb87822dc items=0 ppid=2278 pid=5117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:56:01.007000 audit[5117]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffcb87822f0 a2=0 a3=7ffcb87822dc items=0 ppid=2278 pid=5117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:56:01.007000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:56:01.028392 kernel: audit: type=1327 audit(1761713761.007:557): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:56:01.027000 audit[5117]: NETFILTER_CFG table=nat:132 family=2 entries=104 op=nft_register_chain pid=5117 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:56:01.027000 audit[5117]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffcb87822f0 a2=0 a3=7ffcb87822dc items=0 ppid=2278 pid=5117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:56:01.042402 kernel: audit: type=1325 audit(1761713761.027:558): table=nat:132 family=2 entries=104 op=nft_register_chain pid=5117 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 29 04:56:01.042507 kernel: audit: type=1300 audit(1761713761.027:558): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffcb87822f0 a2=0 a3=7ffcb87822dc items=0 ppid=2278 pid=5117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:56:01.042559 kernel: audit: type=1327 
audit(1761713761.027:558): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:56:01.027000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 29 04:56:01.495753 env[1306]: time="2025-10-29T04:56:01.495017162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 29 04:56:01.819823 env[1306]: time="2025-10-29T04:56:01.819615879Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 04:56:01.820990 env[1306]: time="2025-10-29T04:56:01.820897285Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 29 04:56:01.823934 kubelet[2172]: E1029 04:56:01.823852 2172 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 04:56:01.825531 kubelet[2172]: E1029 04:56:01.825479 2172 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 29 04:56:01.825921 kubelet[2172]: E1029 04:56:01.825837 2172 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:1de7ade95d214bc29a765b1d29f494cd,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sxv89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8647577b76-v9phq_calico-system(419b3e19-fef1-48f6-b46c-276ff1e0b621): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 29 04:56:01.828746 env[1306]: time="2025-10-29T04:56:01.828251266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 29 04:56:02.204640 
env[1306]: time="2025-10-29T04:56:02.204555604Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 04:56:02.206071 env[1306]: time="2025-10-29T04:56:02.205959357Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 29 04:56:02.206564 kubelet[2172]: E1029 04:56:02.206511 2172 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 04:56:02.206760 kubelet[2172]: E1029 04:56:02.206724 2172 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 29 04:56:02.207109 kubelet[2172]: E1029 04:56:02.207038 2172 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sxv89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8647577b76-v9phq_calico-system(419b3e19-fef1-48f6-b46c-276ff1e0b621): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 29 04:56:02.208594 kubelet[2172]: E1029 04:56:02.208536 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8647577b76-v9phq" podUID="419b3e19-fef1-48f6-b46c-276ff1e0b621" Oct 29 04:56:02.485620 kubelet[2172]: E1029 04:56:02.485407 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tptz2" podUID="de4b152a-29bb-4b0c-a12c-2eda92dd0564" Oct 29 04:56:03.427117 systemd[1]: Started sshd@21-10.230.24.246:22-147.75.109.163:58512.service. Oct 29 04:56:03.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.24.246:22-147.75.109.163:58512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:56:03.437501 kernel: audit: type=1130 audit(1761713763.426:559): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.24.246:22-147.75.109.163:58512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:56:04.354573 sshd[5118]: Accepted publickey for core from 147.75.109.163 port 58512 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 04:56:04.353000 audit[5118]: USER_ACCT pid=5118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:56:04.365414 kernel: audit: type=1101 audit(1761713764.353:560): pid=5118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:56:04.366462 sshd[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 04:56:04.363000 audit[5118]: CRED_ACQ pid=5118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh 
res=success' Oct 29 04:56:04.375424 kernel: audit: type=1103 audit(1761713764.363:561): pid=5118 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:56:04.383106 systemd-logind[1290]: New session 20 of user core. Oct 29 04:56:04.384462 systemd[1]: Started session-20.scope. Oct 29 04:56:04.390409 kernel: audit: type=1006 audit(1761713764.364:562): pid=5118 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Oct 29 04:56:04.364000 audit[5118]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffce2ce0fe0 a2=3 a3=0 items=0 ppid=1 pid=5118 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:56:04.399574 kernel: audit: type=1300 audit(1761713764.364:562): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffce2ce0fe0 a2=3 a3=0 items=0 ppid=1 pid=5118 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:56:04.364000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 29 04:56:04.402000 audit[5118]: USER_START pid=5118 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:56:04.423876 kernel: audit: type=1327 audit(1761713764.364:562): proctitle=737368643A20636F7265205B707269765D Oct 29 04:56:04.424103 kernel: audit: type=1105 audit(1761713764.402:563): pid=5118 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:56:04.424151 kernel: audit: type=1103 audit(1761713764.406:564): pid=5121 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:56:04.406000 audit[5121]: CRED_ACQ pid=5121 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:56:05.086630 sshd[5118]: pam_unix(sshd:session): session closed for user core Oct 29 04:56:05.087000 audit[5118]: USER_END pid=5118 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:56:05.098711 kernel: audit: type=1106 audit(1761713765.087:565): pid=5118 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:56:05.087000 audit[5118]: CRED_DISP pid=5118 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:56:05.106153 kernel: audit: type=1104 audit(1761713765.087:566): pid=5118 uid=0 auid=500 ses=20 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:56:05.105284 systemd-logind[1290]: Session 20 logged out. Waiting for processes to exit. Oct 29 04:56:05.105948 systemd[1]: sshd@21-10.230.24.246:22-147.75.109.163:58512.service: Deactivated successfully. Oct 29 04:56:05.107709 systemd[1]: session-20.scope: Deactivated successfully. Oct 29 04:56:05.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.24.246:22-147.75.109.163:58512 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:56:05.108515 systemd-logind[1290]: Removed session 20. Oct 29 04:56:07.502079 env[1306]: time="2025-10-29T04:56:07.502003294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 29 04:56:07.825846 env[1306]: time="2025-10-29T04:56:07.825612398Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 04:56:07.826928 env[1306]: time="2025-10-29T04:56:07.826839031Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 29 04:56:07.827279 kubelet[2172]: E1029 04:56:07.827210 2172 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 04:56:07.827877 kubelet[2172]: E1029 04:56:07.827300 2172 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 29 04:56:07.827877 kubelet[2172]: E1029 04:56:07.827543 2172 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r268s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4pd8r_calico-system(4d534185-a9e7-4c27-807c-917c6d4b755f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 29 04:56:07.829191 kubelet[2172]: E1029 04:56:07.829148 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4pd8r" podUID="4d534185-a9e7-4c27-807c-917c6d4b755f" Oct 29 
04:56:08.109494 systemd[1]: run-containerd-runc-k8s.io-71d6bc5d021ddcc10d95b2af898e8a201f0dc4a4a5e05b9cde0366fccf7e5962-runc.l1eRkB.mount: Deactivated successfully. Oct 29 04:56:09.497548 env[1306]: time="2025-10-29T04:56:09.496839761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 04:56:09.816851 env[1306]: time="2025-10-29T04:56:09.816473498Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 04:56:09.818465 env[1306]: time="2025-10-29T04:56:09.818309815Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 04:56:09.820990 kubelet[2172]: E1029 04:56:09.820894 2172 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 04:56:09.821646 kubelet[2172]: E1029 04:56:09.821602 2172 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 04:56:09.822599 kubelet[2172]: E1029 04:56:09.822510 2172 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kcr87,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-678d6449b5-8q748_calico-apiserver(c95faceb-3919-455c-bd6c-4a68d6375a6d): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 04:56:09.825420 kubelet[2172]: E1029 04:56:09.825355 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-678d6449b5-8q748" podUID="c95faceb-3919-455c-bd6c-4a68d6375a6d" Oct 29 04:56:10.233643 systemd[1]: Started sshd@22-10.230.24.246:22-147.75.109.163:47398.service. Oct 29 04:56:10.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.24.246:22-147.75.109.163:47398 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:56:10.240991 kernel: kauditd_printk_skb: 1 callbacks suppressed Oct 29 04:56:10.241129 kernel: audit: type=1130 audit(1761713770.233:568): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.24.246:22-147.75.109.163:47398 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 04:56:10.486016 env[1306]: time="2025-10-29T04:56:10.485529050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 29 04:56:10.486452 kubelet[2172]: E1029 04:56:10.486400 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-865b7496cf-28bh8" podUID="9c2b379e-441a-4610-b0bd-30a6fa391f82" Oct 29 04:56:10.796297 env[1306]: time="2025-10-29T04:56:10.796111955Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 04:56:10.797725 env[1306]: time="2025-10-29T04:56:10.797633454Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 29 04:56:10.798081 kubelet[2172]: E1029 04:56:10.798006 2172 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 04:56:10.798482 kubelet[2172]: E1029 04:56:10.798099 2172 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 29 04:56:10.798482 kubelet[2172]: E1029 04:56:10.798348 2172 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nm8fq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-678d6449b5-m8bcr_calico-apiserver(b4da2a97-feea-487c-8384-a94163380e6f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 29 04:56:10.800230 kubelet[2172]: E1029 04:56:10.800193 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-678d6449b5-m8bcr" podUID="b4da2a97-feea-487c-8384-a94163380e6f" Oct 29 04:56:11.212000 audit[5154]: USER_ACCT pid=5154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit 
acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:56:11.215340 sshd[5154]: Accepted publickey for core from 147.75.109.163 port 47398 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 04:56:11.222436 kernel: audit: type=1101 audit(1761713771.212:569): pid=5154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:56:11.222000 audit[5154]: CRED_ACQ pid=5154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:56:11.234873 kernel: audit: type=1103 audit(1761713771.222:570): pid=5154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:56:11.234975 kernel: audit: type=1006 audit(1761713771.223:571): pid=5154 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Oct 29 04:56:11.223000 audit[5154]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd57bbf260 a2=3 a3=0 items=0 ppid=1 pid=5154 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:56:11.236181 sshd[5154]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 04:56:11.243392 kernel: audit: type=1300 audit(1761713771.223:571): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd57bbf260 a2=3 a3=0 items=0 ppid=1 pid=5154 auid=500 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:56:11.223000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 29 04:56:11.251474 kernel: audit: type=1327 audit(1761713771.223:571): proctitle=737368643A20636F7265205B707269765D Oct 29 04:56:11.259981 systemd-logind[1290]: New session 21 of user core. Oct 29 04:56:11.261167 systemd[1]: Started session-21.scope. Oct 29 04:56:11.283626 kernel: audit: type=1105 audit(1761713771.274:572): pid=5154 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:56:11.274000 audit[5154]: USER_START pid=5154 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:56:11.275000 audit[5157]: CRED_ACQ pid=5157 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:56:11.295666 kernel: audit: type=1103 audit(1761713771.275:573): pid=5157 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:56:12.267758 sshd[5154]: pam_unix(sshd:session): session closed for user core Oct 29 04:56:12.283443 kernel: audit: type=1106 audit(1761713772.269:574): pid=5154 uid=0 auid=500 ses=21 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:56:12.269000 audit[5154]: USER_END pid=5154 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:56:12.275399 systemd[1]: sshd@22-10.230.24.246:22-147.75.109.163:47398.service: Deactivated successfully. Oct 29 04:56:12.276951 systemd[1]: session-21.scope: Deactivated successfully. Oct 29 04:56:12.286947 systemd-logind[1290]: Session 21 logged out. Waiting for processes to exit. Oct 29 04:56:12.289412 systemd-logind[1290]: Removed session 21. Oct 29 04:56:12.297757 kernel: audit: type=1104 audit(1761713772.270:575): pid=5154 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:56:12.270000 audit[5154]: CRED_DISP pid=5154 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:56:12.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.24.246:22-147.75.109.163:47398 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 04:56:16.487299 kubelet[2172]: E1029 04:56:16.487233 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8647577b76-v9phq" podUID="419b3e19-fef1-48f6-b46c-276ff1e0b621" Oct 29 04:56:17.420738 systemd[1]: Started sshd@23-10.230.24.246:22-147.75.109.163:47410.service. Oct 29 04:56:17.448064 kernel: kauditd_printk_skb: 1 callbacks suppressed Oct 29 04:56:17.448177 kernel: audit: type=1130 audit(1761713777.420:577): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.24.246:22-147.75.109.163:47410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:56:17.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.24.246:22-147.75.109.163:47410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 04:56:17.492135 env[1306]: time="2025-10-29T04:56:17.492057748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 29 04:56:17.833576 env[1306]: time="2025-10-29T04:56:17.833324800Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 04:56:17.834815 env[1306]: time="2025-10-29T04:56:17.834747866Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 29 04:56:17.835247 kubelet[2172]: E1029 04:56:17.835104 2172 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 04:56:17.835757 kubelet[2172]: E1029 04:56:17.835239 2172 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 29 04:56:17.838429 kubelet[2172]: E1029 04:56:17.838221 2172 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kdjn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tptz2_calico-system(de4b152a-29bb-4b0c-a12c-2eda92dd0564): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 29 04:56:17.841456 env[1306]: time="2025-10-29T04:56:17.840655874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 29 04:56:18.211364 env[1306]: time="2025-10-29T04:56:18.211249934Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 29 04:56:18.213003 env[1306]: time="2025-10-29T04:56:18.212941904Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 29 04:56:18.213307 kubelet[2172]: E1029 04:56:18.213251 2172 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 04:56:18.213666 kubelet[2172]: E1029 04:56:18.213461 2172 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 29 04:56:18.214305 kubelet[2172]: E1029 04:56:18.214234 2172 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 
--csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kdjn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tptz2_calico-system(de4b152a-29bb-4b0c-a12c-2eda92dd0564): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 29 04:56:18.215777 kubelet[2172]: E1029 04:56:18.215713 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tptz2" podUID="de4b152a-29bb-4b0c-a12c-2eda92dd0564" Oct 29 04:56:18.485061 kubelet[2172]: E1029 04:56:18.484854 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4pd8r" podUID="4d534185-a9e7-4c27-807c-917c6d4b755f" Oct 29 04:56:18.498336 kernel: audit: type=1101 audit(1761713778.486:578): pid=5187 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 
04:56:18.486000 audit[5187]: USER_ACCT pid=5187 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:56:18.498666 sshd[5187]: Accepted publickey for core from 147.75.109.163 port 47410 ssh2: RSA SHA256:ZzxZ37pC6YJySS9q7Vi2CaqOM6Jn/4IZMTu+T8q4mXw Oct 29 04:56:18.499321 sshd[5187]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 04:56:18.498000 audit[5187]: CRED_ACQ pid=5187 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:56:18.509839 kernel: audit: type=1103 audit(1761713778.498:579): pid=5187 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:56:18.518869 kernel: audit: type=1006 audit(1761713778.498:580): pid=5187 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Oct 29 04:56:18.517806 systemd-logind[1290]: New session 22 of user core. Oct 29 04:56:18.518995 systemd[1]: Started session-22.scope. 
Oct 29 04:56:18.532539 kernel: audit: type=1300 audit(1761713778.498:580): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef68ca990 a2=3 a3=0 items=0 ppid=1 pid=5187 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:56:18.498000 audit[5187]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef68ca990 a2=3 a3=0 items=0 ppid=1 pid=5187 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 04:56:18.498000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 29 04:56:18.544438 kernel: audit: type=1327 audit(1761713778.498:580): proctitle=737368643A20636F7265205B707269765D Oct 29 04:56:18.537000 audit[5187]: USER_START pid=5187 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:56:18.552562 kernel: audit: type=1105 audit(1761713778.537:581): pid=5187 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:56:18.541000 audit[5190]: CRED_ACQ pid=5190 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:56:18.560418 kernel: audit: type=1103 audit(1761713778.541:582): pid=5190 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:56:19.423552 sshd[5187]: pam_unix(sshd:session): session closed for user core Oct 29 04:56:19.437585 kernel: audit: type=1106 audit(1761713779.425:583): pid=5187 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:56:19.425000 audit[5187]: USER_END pid=5187 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:56:19.437623 systemd[1]: sshd@23-10.230.24.246:22-147.75.109.163:47410.service: Deactivated successfully. Oct 29 04:56:19.425000 audit[5187]: CRED_DISP pid=5187 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:56:19.445440 kernel: audit: type=1104 audit(1761713779.425:584): pid=5187 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 29 04:56:19.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.24.246:22-147.75.109.163:47410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 04:56:19.446065 systemd[1]: session-22.scope: Deactivated successfully. 
Oct 29 04:56:19.446094 systemd-logind[1290]: Session 22 logged out. Waiting for processes to exit. Oct 29 04:56:19.448407 systemd-logind[1290]: Removed session 22. Oct 29 04:56:21.486952 kubelet[2172]: E1029 04:56:21.486888 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-678d6449b5-8q748" podUID="c95faceb-3919-455c-bd6c-4a68d6375a6d" Oct 29 04:56:21.491922 kubelet[2172]: E1029 04:56:21.491058 2172 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-678d6449b5-m8bcr" podUID="b4da2a97-feea-487c-8384-a94163380e6f"