Oct 31 05:42:14.950892 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Oct 30 23:32:41 -00 2025
Oct 31 05:42:14.950935 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=7605c743a37b990723033788c91d5dcda748347858877b1088098370c2a7e4d3
Oct 31 05:42:14.950953 kernel: BIOS-provided physical RAM map:
Oct 31 05:42:14.950964 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 31 05:42:14.950974 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 31 05:42:14.950983 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 31 05:42:14.950995 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Oct 31 05:42:14.951005 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Oct 31 05:42:14.951015 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Oct 31 05:42:14.951025 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Oct 31 05:42:14.951039 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 31 05:42:14.951049 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 31 05:42:14.951059 kernel: NX (Execute Disable) protection: active
Oct 31 05:42:14.951069 kernel: SMBIOS 2.8 present.
Oct 31 05:42:14.951082 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Oct 31 05:42:14.951093 kernel: Hypervisor detected: KVM
Oct 31 05:42:14.951108 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 31 05:42:14.951129 kernel: kvm-clock: cpu 0, msr 2d1a0001, primary cpu clock
Oct 31 05:42:14.956168 kernel: kvm-clock: using sched offset of 4769515551 cycles
Oct 31 05:42:14.956196 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 31 05:42:14.956209 kernel: tsc: Detected 2499.998 MHz processor
Oct 31 05:42:14.956221 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 31 05:42:14.956233 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 31 05:42:14.956244 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Oct 31 05:42:14.956256 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 31 05:42:14.956274 kernel: Using GB pages for direct mapping
Oct 31 05:42:14.956285 kernel: ACPI: Early table checksum verification disabled
Oct 31 05:42:14.956297 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct 31 05:42:14.956308 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 05:42:14.956319 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 05:42:14.956330 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 05:42:14.956341 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Oct 31 05:42:14.956361 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 05:42:14.956375 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 05:42:14.956391 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 05:42:14.956402 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 05:42:14.956413 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Oct 31 05:42:14.956424 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Oct 31 05:42:14.956435 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Oct 31 05:42:14.956446 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Oct 31 05:42:14.956463 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Oct 31 05:42:14.956478 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Oct 31 05:42:14.956490 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Oct 31 05:42:14.956502 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Oct 31 05:42:14.956513 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Oct 31 05:42:14.956525 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Oct 31 05:42:14.956536 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Oct 31 05:42:14.956548 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Oct 31 05:42:14.956563 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Oct 31 05:42:14.956575 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Oct 31 05:42:14.956586 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Oct 31 05:42:14.956598 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Oct 31 05:42:14.956609 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Oct 31 05:42:14.956621 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Oct 31 05:42:14.956632 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Oct 31 05:42:14.956644 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Oct 31 05:42:14.956655 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Oct 31 05:42:14.956667 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Oct 31 05:42:14.956682 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Oct 31 05:42:14.956694 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Oct 31 05:42:14.956705 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Oct 31 05:42:14.956717 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Oct 31 05:42:14.956729 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Oct 31 05:42:14.956741 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Oct 31 05:42:14.956753 kernel: Zone ranges:
Oct 31 05:42:14.956764 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 31 05:42:14.956776 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Oct 31 05:42:14.956792 kernel: Normal empty
Oct 31 05:42:14.956804 kernel: Movable zone start for each node
Oct 31 05:42:14.956815 kernel: Early memory node ranges
Oct 31 05:42:14.956827 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Oct 31 05:42:14.956839 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Oct 31 05:42:14.956850 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Oct 31 05:42:14.956862 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 31 05:42:14.956874 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 31 05:42:14.956885 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Oct 31 05:42:14.956901 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 31 05:42:14.956912 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 31 05:42:14.956924 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 31 05:42:14.956936 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 31 05:42:14.956948 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 31 05:42:14.956959 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 31 05:42:14.956971 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 31 05:42:14.956983 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 31 05:42:14.956995 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 31 05:42:14.957010 kernel: TSC deadline timer available
Oct 31 05:42:14.957022 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Oct 31 05:42:14.957034 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Oct 31 05:42:14.957045 kernel: Booting paravirtualized kernel on KVM
Oct 31 05:42:14.957057 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 31 05:42:14.957069 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Oct 31 05:42:14.957081 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Oct 31 05:42:14.957093 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Oct 31 05:42:14.957104 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Oct 31 05:42:14.957131 kernel: kvm-guest: stealtime: cpu 0, msr 7da1c0c0
Oct 31 05:42:14.958204 kernel: kvm-guest: PV spinlocks enabled
Oct 31 05:42:14.958218 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 31 05:42:14.958230 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Oct 31 05:42:14.958242 kernel: Policy zone: DMA32
Oct 31 05:42:14.958255 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=7605c743a37b990723033788c91d5dcda748347858877b1088098370c2a7e4d3
Oct 31 05:42:14.958268 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 31 05:42:14.958280 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 31 05:42:14.958299 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 31 05:42:14.958311 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 31 05:42:14.958323 kernel: Memory: 1903832K/2096616K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47496K init, 4084K bss, 192524K reserved, 0K cma-reserved)
Oct 31 05:42:14.958335 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Oct 31 05:42:14.958347 kernel: Kernel/User page tables isolation: enabled
Oct 31 05:42:14.958371 kernel: ftrace: allocating 34614 entries in 136 pages
Oct 31 05:42:14.958383 kernel: ftrace: allocated 136 pages with 2 groups
Oct 31 05:42:14.958396 kernel: rcu: Hierarchical RCU implementation.
Oct 31 05:42:14.958408 kernel: rcu: RCU event tracing is enabled.
Oct 31 05:42:14.958425 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Oct 31 05:42:14.958437 kernel: Rude variant of Tasks RCU enabled.
Oct 31 05:42:14.958449 kernel: Tracing variant of Tasks RCU enabled.
Oct 31 05:42:14.958461 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 31 05:42:14.958473 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Oct 31 05:42:14.958485 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Oct 31 05:42:14.958497 kernel: random: crng init done
Oct 31 05:42:14.958520 kernel: Console: colour VGA+ 80x25
Oct 31 05:42:14.958533 kernel: printk: console [tty0] enabled
Oct 31 05:42:14.958545 kernel: printk: console [ttyS0] enabled
Oct 31 05:42:14.958557 kernel: ACPI: Core revision 20210730
Oct 31 05:42:14.958569 kernel: APIC: Switch to symmetric I/O mode setup
Oct 31 05:42:14.958585 kernel: x2apic enabled
Oct 31 05:42:14.958597 kernel: Switched APIC routing to physical x2apic.
Oct 31 05:42:14.958610 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Oct 31 05:42:14.958623 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Oct 31 05:42:14.958635 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 31 05:42:14.958651 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Oct 31 05:42:14.958664 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Oct 31 05:42:14.958676 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 31 05:42:14.958688 kernel: Spectre V2 : Mitigation: Retpolines
Oct 31 05:42:14.958700 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 31 05:42:14.958712 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Oct 31 05:42:14.958725 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 31 05:42:14.958737 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Oct 31 05:42:14.958749 kernel: MDS: Mitigation: Clear CPU buffers
Oct 31 05:42:14.958761 kernel: MMIO Stale Data: Unknown: No mitigations
Oct 31 05:42:14.958773 kernel: SRBDS: Unknown: Dependent on hypervisor status
Oct 31 05:42:14.958789 kernel: active return thunk: its_return_thunk
Oct 31 05:42:14.958801 kernel: ITS: Mitigation: Aligned branch/return thunks
Oct 31 05:42:14.958813 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 31 05:42:14.958825 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 31 05:42:14.958838 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 31 05:42:14.958850 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 31 05:42:14.958862 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Oct 31 05:42:14.958874 kernel: Freeing SMP alternatives memory: 32K
Oct 31 05:42:14.958886 kernel: pid_max: default: 32768 minimum: 301
Oct 31 05:42:14.958898 kernel: LSM: Security Framework initializing
Oct 31 05:42:14.958910 kernel: SELinux: Initializing.
Oct 31 05:42:14.958926 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 31 05:42:14.958939 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 31 05:42:14.958951 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Oct 31 05:42:14.958963 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Oct 31 05:42:14.958976 kernel: signal: max sigframe size: 1776
Oct 31 05:42:14.959006 kernel: rcu: Hierarchical SRCU implementation.
Oct 31 05:42:14.959018 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Oct 31 05:42:14.959031 kernel: smp: Bringing up secondary CPUs ...
Oct 31 05:42:14.959043 kernel: x86: Booting SMP configuration:
Oct 31 05:42:14.959055 kernel: .... node #0, CPUs: #1
Oct 31 05:42:14.959072 kernel: kvm-clock: cpu 1, msr 2d1a0041, secondary cpu clock
Oct 31 05:42:14.959084 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Oct 31 05:42:14.959096 kernel: kvm-guest: stealtime: cpu 1, msr 7da5c0c0
Oct 31 05:42:14.959109 kernel: smp: Brought up 1 node, 2 CPUs
Oct 31 05:42:14.959132 kernel: smpboot: Max logical packages: 16
Oct 31 05:42:14.959145 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Oct 31 05:42:14.959158 kernel: devtmpfs: initialized
Oct 31 05:42:14.959170 kernel: x86/mm: Memory block size: 128MB
Oct 31 05:42:14.959182 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 31 05:42:14.959200 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Oct 31 05:42:14.959213 kernel: pinctrl core: initialized pinctrl subsystem
Oct 31 05:42:14.959225 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 31 05:42:14.959237 kernel: audit: initializing netlink subsys (disabled)
Oct 31 05:42:14.959250 kernel: audit: type=2000 audit(1761889333.612:1): state=initialized audit_enabled=0 res=1
Oct 31 05:42:14.959262 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 31 05:42:14.959275 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 31 05:42:14.959287 kernel: cpuidle: using governor menu
Oct 31 05:42:14.959299 kernel: ACPI: bus type PCI registered
Oct 31 05:42:14.959315 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 31 05:42:14.959328 kernel: dca service started, version 1.12.1
Oct 31 05:42:14.959340 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Oct 31 05:42:14.959361 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Oct 31 05:42:14.959375 kernel: PCI: Using configuration type 1 for base access
Oct 31 05:42:14.959388 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 31 05:42:14.959400 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Oct 31 05:42:14.959412 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Oct 31 05:42:14.959424 kernel: ACPI: Added _OSI(Module Device)
Oct 31 05:42:14.959441 kernel: ACPI: Added _OSI(Processor Device)
Oct 31 05:42:14.959453 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 31 05:42:14.959465 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Oct 31 05:42:14.959478 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Oct 31 05:42:14.959490 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Oct 31 05:42:14.959502 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 31 05:42:14.959514 kernel: ACPI: Interpreter enabled
Oct 31 05:42:14.959527 kernel: ACPI: PM: (supports S0 S5)
Oct 31 05:42:14.959539 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 31 05:42:14.959555 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 31 05:42:14.959567 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 31 05:42:14.959579 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 31 05:42:14.959844 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 31 05:42:14.960009 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 31 05:42:14.960184 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 31 05:42:14.960203 kernel: PCI host bridge to bus 0000:00
Oct 31 05:42:14.960390 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 31 05:42:14.960536 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 31 05:42:14.960704 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 31 05:42:14.960873 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Oct 31 05:42:14.961014 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 31 05:42:14.961178 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Oct 31 05:42:14.961321 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 31 05:42:14.961539 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Oct 31 05:42:14.961723 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Oct 31 05:42:14.961885 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Oct 31 05:42:14.962049 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Oct 31 05:42:14.962231 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Oct 31 05:42:14.962406 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 31 05:42:14.962584 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Oct 31 05:42:14.962843 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Oct 31 05:42:14.963020 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Oct 31 05:42:14.969280 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Oct 31 05:42:14.969480 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Oct 31 05:42:14.969648 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Oct 31 05:42:14.969820 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Oct 31 05:42:14.969987 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Oct 31 05:42:14.970175 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Oct 31 05:42:14.970337 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Oct 31 05:42:14.970516 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Oct 31 05:42:14.970674 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Oct 31 05:42:14.970864 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Oct 31 05:42:14.971047 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Oct 31 05:42:14.971232 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Oct 31 05:42:14.971404 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Oct 31 05:42:14.971570 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Oct 31 05:42:14.971727 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Oct 31 05:42:14.971881 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Oct 31 05:42:14.972043 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Oct 31 05:42:14.972214 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Oct 31 05:42:14.972435 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Oct 31 05:42:14.972633 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Oct 31 05:42:14.972800 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Oct 31 05:42:14.972968 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Oct 31 05:42:14.973214 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Oct 31 05:42:14.973417 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 31 05:42:14.973596 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Oct 31 05:42:14.973773 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Oct 31 05:42:14.973941 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Oct 31 05:42:14.974135 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Oct 31 05:42:14.974298 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Oct 31 05:42:14.974484 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Oct 31 05:42:14.974672 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Oct 31 05:42:14.974832 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Oct 31 05:42:14.974986 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Oct 31 05:42:14.975164 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Oct 31 05:42:14.975380 kernel: pci_bus 0000:02: extended config space not accessible
Oct 31 05:42:14.975585 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Oct 31 05:42:14.975793 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Oct 31 05:42:14.975974 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Oct 31 05:42:14.984204 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Oct 31 05:42:14.984435 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Oct 31 05:42:14.984625 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Oct 31 05:42:14.984792 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Oct 31 05:42:14.984959 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Oct 31 05:42:14.985116 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Oct 31 05:42:14.985318 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Oct 31 05:42:14.985497 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Oct 31 05:42:14.985659 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Oct 31 05:42:14.985838 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Oct 31 05:42:14.986000 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Oct 31 05:42:14.986178 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Oct 31 05:42:14.986371 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Oct 31 05:42:14.986529 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Oct 31 05:42:14.986697 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Oct 31 05:42:14.986864 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Oct 31 05:42:14.987020 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Oct 31 05:42:14.987210 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Oct 31 05:42:14.987387 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Oct 31 05:42:14.987543 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Oct 31 05:42:14.987708 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Oct 31 05:42:14.987863 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Oct 31 05:42:14.988019 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Oct 31 05:42:14.988190 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Oct 31 05:42:14.988349 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Oct 31 05:42:14.988520 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 31 05:42:14.988539 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 31 05:42:14.988552 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 31 05:42:14.988571 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 31 05:42:14.988584 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 31 05:42:14.988597 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 31 05:42:14.988610 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 31 05:42:14.988623 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 31 05:42:14.988635 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 31 05:42:14.988648 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 31 05:42:14.988660 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 31 05:42:14.988672 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 31 05:42:14.988689 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 31 05:42:14.988701 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 31 05:42:14.988714 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 31 05:42:14.988726 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 31 05:42:14.988739 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 31 05:42:14.988751 kernel: iommu: Default domain type: Translated
Oct 31 05:42:14.988764 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 31 05:42:14.988918 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 31 05:42:14.989072 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 31 05:42:14.989257 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 31 05:42:14.989277 kernel: vgaarb: loaded
Oct 31 05:42:14.989290 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 31 05:42:14.989302 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Oct 31 05:42:14.989315 kernel: PTP clock support registered
Oct 31 05:42:14.989327 kernel: PCI: Using ACPI for IRQ routing
Oct 31 05:42:14.989340 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 31 05:42:14.989361 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 31 05:42:14.989381 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Oct 31 05:42:14.989394 kernel: clocksource: Switched to clocksource kvm-clock
Oct 31 05:42:14.989407 kernel: VFS: Disk quotas dquot_6.6.0
Oct 31 05:42:14.989419 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 31 05:42:14.989432 kernel: pnp: PnP ACPI init
Oct 31 05:42:14.989618 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Oct 31 05:42:14.989639 kernel: pnp: PnP ACPI: found 5 devices
Oct 31 05:42:14.989652 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 31 05:42:14.989665 kernel: NET: Registered PF_INET protocol family
Oct 31 05:42:14.989683 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 31 05:42:14.989696 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Oct 31 05:42:14.989709 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 31 05:42:14.989721 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 31 05:42:14.989734 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Oct 31 05:42:14.989747 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Oct 31 05:42:14.989759 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 31 05:42:14.989772 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 31 05:42:14.989789 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 31 05:42:14.989801 kernel: NET: Registered PF_XDP protocol family
Oct 31 05:42:14.989959 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Oct 31 05:42:14.990139 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Oct 31 05:42:14.990302 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Oct 31 05:42:14.990471 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Oct 31 05:42:14.990626 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Oct 31 05:42:14.990788 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Oct 31 05:42:14.990942 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Oct 31 05:42:14.991099 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Oct 31 05:42:15.004839 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Oct 31 05:42:15.005028 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Oct 31 05:42:15.005216 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Oct 31 05:42:15.005393 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Oct 31 05:42:15.005565 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Oct 31 05:42:15.005725 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Oct 31 05:42:15.005884 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Oct 31 05:42:15.006043 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Oct 31 05:42:15.006228 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Oct 31 05:42:15.006407 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Oct 31 05:42:15.006566 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Oct 31 05:42:15.006723 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Oct 31 05:42:15.006888 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Oct 31 05:42:15.007068 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Oct 31 05:42:15.007267 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Oct 31 05:42:15.007442 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Oct 31 05:42:15.007610 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Oct 31 05:42:15.007768 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Oct 31 05:42:15.007928 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Oct 31 05:42:15.008098 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Oct 31 05:42:15.008287 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Oct 31 05:42:15.008478 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Oct 31 05:42:15.008648 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Oct 31 05:42:15.008818 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Oct 31 05:42:15.008977 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Oct 31 05:42:15.009152 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Oct 31 05:42:15.009309 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Oct 31 05:42:15.009484 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Oct 31 05:42:15.009653 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Oct 31 05:42:15.009827 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Oct 31 05:42:15.009987 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Oct 31 05:42:15.010162 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Oct 31 05:42:15.010334 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Oct 31 05:42:15.010506 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Oct 31 05:42:15.010694 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Oct 31 05:42:15.010885 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Oct 31 05:42:15.011056 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Oct 31 05:42:15.011290 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Oct 31 05:42:15.011460 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Oct 31 05:42:15.011617 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Oct 31 05:42:15.011778 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Oct 31 05:42:15.011934 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 31 05:42:15.012082 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 31 05:42:15.012239 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 31 05:42:15.012394 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 31 05:42:15.012536 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Oct 31 05:42:15.012688 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Oct 31 05:42:15.012839 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Oct 31 05:42:15.013003 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Oct 31 05:42:15.013188 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Oct 31 05:42:15.013373 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Oct 31 05:42:15.013537 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Oct 31 05:42:15.013725 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Oct 31 05:42:15.013885 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Oct 31 05:42:15.014045 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Oct 31 05:42:15.014227 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Oct 31 05:42:15.014391 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Oct 31 05:42:15.014542 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Oct 31 05:42:15.014709 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Oct 31 05:42:15.014859 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Oct 31 05:42:15.015013 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Oct 31 05:42:15.015194 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Oct 31 05:42:15.015363 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Oct 31 05:42:15.015518 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Oct 31 05:42:15.015676 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Oct 31 05:42:15.015826 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Oct 31 05:42:15.015975 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Oct 31 05:42:15.016151 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Oct 31 05:42:15.016311 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Oct 31 05:42:15.016475 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Oct 31 05:42:15.016658 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Oct 31 05:42:15.016822 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Oct 31 05:42:15.016971 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 31 05:42:15.016991 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 31 05:42:15.017005 kernel: PCI: CLS 0 bytes, default 64
Oct 31 05:42:15.017019 kernel: 
PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Oct 31 05:42:15.017039 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Oct 31 05:42:15.017052 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Oct 31 05:42:15.017066 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Oct 31 05:42:15.017079 kernel: Initialise system trusted keyrings Oct 31 05:42:15.017092 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 31 05:42:15.017105 kernel: Key type asymmetric registered Oct 31 05:42:15.017133 kernel: Asymmetric key parser 'x509' registered Oct 31 05:42:15.017149 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 31 05:42:15.017163 kernel: io scheduler mq-deadline registered Oct 31 05:42:15.017182 kernel: io scheduler kyber registered Oct 31 05:42:15.017195 kernel: io scheduler bfq registered Oct 31 05:42:15.017373 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Oct 31 05:42:15.017547 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Oct 31 05:42:15.017718 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 31 05:42:15.017897 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Oct 31 05:42:15.018075 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Oct 31 05:42:15.024878 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 31 05:42:15.025054 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Oct 31 05:42:15.025238 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Oct 31 05:42:15.025413 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 31 05:42:15.025573 
kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Oct 31 05:42:15.025731 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Oct 31 05:42:15.025899 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 31 05:42:15.026059 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Oct 31 05:42:15.026232 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Oct 31 05:42:15.026403 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 31 05:42:15.026564 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Oct 31 05:42:15.026721 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Oct 31 05:42:15.026889 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 31 05:42:15.027049 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Oct 31 05:42:15.027227 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Oct 31 05:42:15.027398 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 31 05:42:15.027558 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Oct 31 05:42:15.027715 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Oct 31 05:42:15.027880 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 31 05:42:15.027901 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 31 05:42:15.027916 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 31 05:42:15.027929 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Oct 31 05:42:15.027951 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 31 05:42:15.027964 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, 
base_baud = 115200) is a 16550A Oct 31 05:42:15.027978 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 31 05:42:15.027997 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 31 05:42:15.028014 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 31 05:42:15.028028 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 31 05:42:15.033997 kernel: rtc_cmos 00:03: RTC can wake from S4 Oct 31 05:42:15.034182 kernel: rtc_cmos 00:03: registered as rtc0 Oct 31 05:42:15.034334 kernel: rtc_cmos 00:03: setting system clock to 2025-10-31T05:42:14 UTC (1761889334) Oct 31 05:42:15.034496 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Oct 31 05:42:15.034516 kernel: intel_pstate: CPU model not supported Oct 31 05:42:15.034530 kernel: NET: Registered PF_INET6 protocol family Oct 31 05:42:15.034551 kernel: Segment Routing with IPv6 Oct 31 05:42:15.034564 kernel: In-situ OAM (IOAM) with IPv6 Oct 31 05:42:15.034577 kernel: NET: Registered PF_PACKET protocol family Oct 31 05:42:15.034591 kernel: Key type dns_resolver registered Oct 31 05:42:15.034604 kernel: IPI shorthand broadcast: enabled Oct 31 05:42:15.034617 kernel: sched_clock: Marking stable (1010813921, 227554754)->(1537476836, -299108161) Oct 31 05:42:15.034631 kernel: registered taskstats version 1 Oct 31 05:42:15.034644 kernel: Loading compiled-in X.509 certificates Oct 31 05:42:15.034657 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: 8306d4e745b00e76b5fae2596c709096b7f28adc' Oct 31 05:42:15.034675 kernel: Key type .fscrypt registered Oct 31 05:42:15.034687 kernel: Key type fscrypt-provisioning registered Oct 31 05:42:15.034701 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 31 05:42:15.034714 kernel: ima: Allocated hash algorithm: sha1 Oct 31 05:42:15.034727 kernel: ima: No architecture policies found Oct 31 05:42:15.034740 kernel: clk: Disabling unused clocks Oct 31 05:42:15.034753 kernel: Freeing unused kernel image (initmem) memory: 47496K Oct 31 05:42:15.034766 kernel: Write protecting the kernel read-only data: 28672k Oct 31 05:42:15.034784 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Oct 31 05:42:15.034797 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Oct 31 05:42:15.034810 kernel: Run /init as init process Oct 31 05:42:15.034823 kernel: with arguments: Oct 31 05:42:15.034836 kernel: /init Oct 31 05:42:15.034850 kernel: with environment: Oct 31 05:42:15.034862 kernel: HOME=/ Oct 31 05:42:15.034875 kernel: TERM=linux Oct 31 05:42:15.034888 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 31 05:42:15.034913 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 31 05:42:15.034937 systemd[1]: Detected virtualization kvm. Oct 31 05:42:15.034951 systemd[1]: Detected architecture x86-64. Oct 31 05:42:15.034964 systemd[1]: Running in initrd. Oct 31 05:42:15.034978 systemd[1]: No hostname configured, using default hostname. Oct 31 05:42:15.034991 systemd[1]: Hostname set to . Oct 31 05:42:15.035005 systemd[1]: Initializing machine ID from VM UUID. Oct 31 05:42:15.035019 systemd[1]: Queued start job for default target initrd.target. Oct 31 05:42:15.035036 systemd[1]: Started systemd-ask-password-console.path. Oct 31 05:42:15.035050 systemd[1]: Reached target cryptsetup.target. Oct 31 05:42:15.035064 systemd[1]: Reached target paths.target. Oct 31 05:42:15.035078 systemd[1]: Reached target slices.target. 
Oct 31 05:42:15.035100 systemd[1]: Reached target swap.target. Oct 31 05:42:15.035113 systemd[1]: Reached target timers.target. Oct 31 05:42:15.035142 systemd[1]: Listening on iscsid.socket. Oct 31 05:42:15.035161 systemd[1]: Listening on iscsiuio.socket. Oct 31 05:42:15.035175 systemd[1]: Listening on systemd-journald-audit.socket. Oct 31 05:42:15.035195 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 31 05:42:15.035209 systemd[1]: Listening on systemd-journald.socket. Oct 31 05:42:15.035222 systemd[1]: Listening on systemd-networkd.socket. Oct 31 05:42:15.035236 systemd[1]: Listening on systemd-udevd-control.socket. Oct 31 05:42:15.035258 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 31 05:42:15.035271 systemd[1]: Reached target sockets.target. Oct 31 05:42:15.035285 systemd[1]: Starting kmod-static-nodes.service... Oct 31 05:42:15.035303 systemd[1]: Finished network-cleanup.service. Oct 31 05:42:15.035322 systemd[1]: Starting systemd-fsck-usr.service... Oct 31 05:42:15.035336 systemd[1]: Starting systemd-journald.service... Oct 31 05:42:15.035363 systemd[1]: Starting systemd-modules-load.service... Oct 31 05:42:15.035386 systemd[1]: Starting systemd-resolved.service... Oct 31 05:42:15.035400 systemd[1]: Starting systemd-vconsole-setup.service... Oct 31 05:42:15.035413 systemd[1]: Finished kmod-static-nodes.service. Oct 31 05:42:15.035427 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 31 05:42:15.035445 kernel: Bridge firewalling registered Oct 31 05:42:15.035472 systemd-journald[201]: Journal started Oct 31 05:42:15.035548 systemd-journald[201]: Runtime Journal (/run/log/journal/a23de79e896e47598bec33a5bfb3cda8) is 4.7M, max 38.1M, 33.3M free. 
Oct 31 05:42:14.953023 systemd-modules-load[202]: Inserted module 'overlay' Oct 31 05:42:15.005017 systemd-resolved[203]: Positive Trust Anchors: Oct 31 05:42:15.057688 systemd[1]: Started systemd-resolved.service. Oct 31 05:42:15.057722 kernel: audit: type=1130 audit(1761889335.050:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:15.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:15.005036 systemd-resolved[203]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 31 05:42:15.005080 systemd-resolved[203]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 31 05:42:15.062900 kernel: SCSI subsystem initialized Oct 31 05:42:15.015296 systemd-resolved[203]: Defaulting to hostname 'linux'. Oct 31 05:42:15.035922 systemd-modules-load[202]: Inserted module 'br_netfilter' Oct 31 05:42:15.075925 systemd[1]: Started systemd-journald.service. Oct 31 05:42:15.075954 kernel: audit: type=1130 audit(1761889335.064:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 05:42:15.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:15.067882 systemd[1]: Finished systemd-fsck-usr.service. Oct 31 05:42:15.098705 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 31 05:42:15.098732 kernel: audit: type=1130 audit(1761889335.067:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:15.098751 kernel: device-mapper: uevent: version 1.0.3 Oct 31 05:42:15.098776 kernel: audit: type=1130 audit(1761889335.068:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:15.098794 kernel: audit: type=1130 audit(1761889335.069:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:15.098824 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 31 05:42:15.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:15.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:15.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Oct 31 05:42:15.068718 systemd[1]: Finished systemd-vconsole-setup.service. Oct 31 05:42:15.069557 systemd[1]: Reached target nss-lookup.target. Oct 31 05:42:15.076438 systemd[1]: Starting dracut-cmdline-ask.service... Oct 31 05:42:15.099259 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 31 05:42:15.112861 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 31 05:42:15.113829 systemd-modules-load[202]: Inserted module 'dm_multipath' Oct 31 05:42:15.120928 kernel: audit: type=1130 audit(1761889335.113:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:15.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:15.114553 systemd[1]: Finished systemd-modules-load.service. Oct 31 05:42:15.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:15.123366 systemd[1]: Starting systemd-sysctl.service... Oct 31 05:42:15.128688 kernel: audit: type=1130 audit(1761889335.122:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:15.134570 systemd[1]: Finished dracut-cmdline-ask.service. Oct 31 05:42:15.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:15.137230 systemd[1]: Starting dracut-cmdline.service... 
Oct 31 05:42:15.143706 kernel: audit: type=1130 audit(1761889335.135:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:15.142347 systemd[1]: Finished systemd-sysctl.service. Oct 31 05:42:15.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:15.160264 dracut-cmdline[224]: dracut-dracut-053 Oct 31 05:42:15.160264 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=7605c743a37b990723033788c91d5dcda748347858877b1088098370c2a7e4d3 Oct 31 05:42:15.166464 kernel: audit: type=1130 audit(1761889335.142:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:15.243160 kernel: Loading iSCSI transport class v2.0-870. Oct 31 05:42:15.265159 kernel: iscsi: registered transport (tcp) Oct 31 05:42:15.294864 kernel: iscsi: registered transport (qla4xxx) Oct 31 05:42:15.294943 kernel: QLogic iSCSI HBA Driver Oct 31 05:42:15.342667 systemd[1]: Finished dracut-cmdline.service. Oct 31 05:42:15.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:15.344705 systemd[1]: Starting dracut-pre-udev.service... 
Oct 31 05:42:15.404228 kernel: raid6: sse2x4 gen() 13583 MB/s Oct 31 05:42:15.422209 kernel: raid6: sse2x4 xor() 7753 MB/s Oct 31 05:42:15.440246 kernel: raid6: sse2x2 gen() 9092 MB/s Oct 31 05:42:15.458160 kernel: raid6: sse2x2 xor() 7779 MB/s Oct 31 05:42:15.476264 kernel: raid6: sse2x1 gen() 9439 MB/s Oct 31 05:42:15.495010 kernel: raid6: sse2x1 xor() 7004 MB/s Oct 31 05:42:15.495062 kernel: raid6: using algorithm sse2x4 gen() 13583 MB/s Oct 31 05:42:15.495081 kernel: raid6: .... xor() 7753 MB/s, rmw enabled Oct 31 05:42:15.496336 kernel: raid6: using ssse3x2 recovery algorithm Oct 31 05:42:15.514182 kernel: xor: automatically using best checksumming function avx Oct 31 05:42:15.635281 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Oct 31 05:42:15.647868 systemd[1]: Finished dracut-pre-udev.service. Oct 31 05:42:15.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:15.649000 audit: BPF prog-id=7 op=LOAD Oct 31 05:42:15.649000 audit: BPF prog-id=8 op=LOAD Oct 31 05:42:15.649834 systemd[1]: Starting systemd-udevd.service... Oct 31 05:42:15.667750 systemd-udevd[401]: Using default interface naming scheme 'v252'. Oct 31 05:42:15.677229 systemd[1]: Started systemd-udevd.service. Oct 31 05:42:15.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:15.682746 systemd[1]: Starting dracut-pre-trigger.service... Oct 31 05:42:15.700679 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation Oct 31 05:42:15.741873 systemd[1]: Finished dracut-pre-trigger.service. 
Oct 31 05:42:15.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:15.743759 systemd[1]: Starting systemd-udev-trigger.service... Oct 31 05:42:15.840647 systemd[1]: Finished systemd-udev-trigger.service. Oct 31 05:42:15.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:15.944183 kernel: cryptd: max_cpu_qlen set to 1000 Oct 31 05:42:15.951150 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Oct 31 05:42:15.985772 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 31 05:42:15.985807 kernel: GPT:17805311 != 125829119 Oct 31 05:42:15.985832 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 31 05:42:15.985849 kernel: GPT:17805311 != 125829119 Oct 31 05:42:15.985872 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 31 05:42:15.985888 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 05:42:15.985906 kernel: AVX version of gcm_enc/dec engaged. Oct 31 05:42:15.985922 kernel: AES CTR mode by8 optimization enabled Oct 31 05:42:15.995903 kernel: ACPI: bus type USB registered Oct 31 05:42:15.995945 kernel: usbcore: registered new interface driver usbfs Oct 31 05:42:15.997431 kernel: usbcore: registered new interface driver hub Oct 31 05:42:15.998923 kernel: usbcore: registered new device driver usb Oct 31 05:42:16.022244 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. 
Oct 31 05:42:16.170019 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (446) Oct 31 05:42:16.170069 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Oct 31 05:42:16.170491 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Oct 31 05:42:16.170740 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Oct 31 05:42:16.170931 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Oct 31 05:42:16.171173 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Oct 31 05:42:16.171384 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Oct 31 05:42:16.171566 kernel: hub 1-0:1.0: USB hub found Oct 31 05:42:16.171817 kernel: hub 1-0:1.0: 4 ports detected Oct 31 05:42:16.172019 kernel: libata version 3.00 loaded. Oct 31 05:42:16.172039 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Oct 31 05:42:16.172409 kernel: hub 2-0:1.0: USB hub found Oct 31 05:42:16.172631 kernel: hub 2-0:1.0: 4 ports detected Oct 31 05:42:16.172831 kernel: ahci 0000:00:1f.2: version 3.0 Oct 31 05:42:16.173038 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 31 05:42:16.173067 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Oct 31 05:42:16.173260 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 31 05:42:16.173497 kernel: scsi host0: ahci Oct 31 05:42:16.173702 kernel: scsi host1: ahci Oct 31 05:42:16.173897 kernel: scsi host2: ahci Oct 31 05:42:16.174110 kernel: scsi host3: ahci Oct 31 05:42:16.174322 kernel: scsi host4: ahci Oct 31 05:42:16.174553 kernel: scsi host5: ahci Oct 31 05:42:16.174758 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Oct 31 05:42:16.174777 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Oct 31 05:42:16.174822 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Oct 31 05:42:16.174839 
kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Oct 31 05:42:16.174856 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Oct 31 05:42:16.174873 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Oct 31 05:42:16.186355 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 31 05:42:16.187188 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 31 05:42:16.193611 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 31 05:42:16.199273 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 31 05:42:16.201559 systemd[1]: Starting disk-uuid.service... Oct 31 05:42:16.211206 disk-uuid[528]: Primary Header is updated. Oct 31 05:42:16.211206 disk-uuid[528]: Secondary Entries is updated. Oct 31 05:42:16.211206 disk-uuid[528]: Secondary Header is updated. Oct 31 05:42:16.219055 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 05:42:16.223166 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 05:42:16.295264 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Oct 31 05:42:16.398720 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 31 05:42:16.398817 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 31 05:42:16.405629 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 31 05:42:16.405672 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 31 05:42:16.406166 kernel: ata3: SATA link down (SStatus 0 SControl 300) Oct 31 05:42:16.409226 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 31 05:42:16.436155 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 31 05:42:16.443412 kernel: usbcore: registered new interface driver usbhid Oct 31 05:42:16.443451 kernel: usbhid: USB HID core driver Oct 31 05:42:16.450162 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Oct 31 
05:42:16.454149 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Oct 31 05:42:17.223671 disk-uuid[529]: The operation has completed successfully. Oct 31 05:42:17.224720 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 05:42:17.288616 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 31 05:42:17.289410 systemd[1]: Finished disk-uuid.service. Oct 31 05:42:17.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:17.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:17.296312 systemd[1]: Starting verity-setup.service... Oct 31 05:42:17.316169 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Oct 31 05:42:17.380024 systemd[1]: Found device dev-mapper-usr.device. Oct 31 05:42:17.382513 systemd[1]: Mounting sysusr-usr.mount... Oct 31 05:42:17.384354 systemd[1]: Finished verity-setup.service. Oct 31 05:42:17.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:17.485226 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 31 05:42:17.485994 systemd[1]: Mounted sysusr-usr.mount. Oct 31 05:42:17.486884 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Oct 31 05:42:17.488168 systemd[1]: Starting ignition-setup.service... Oct 31 05:42:17.490038 systemd[1]: Starting parse-ip-for-networkd.service... 
Oct 31 05:42:17.512189 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 31 05:42:17.512245 kernel: BTRFS info (device vda6): using free space tree Oct 31 05:42:17.512265 kernel: BTRFS info (device vda6): has skinny extents Oct 31 05:42:17.536301 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 31 05:42:17.544416 systemd[1]: Finished ignition-setup.service. Oct 31 05:42:17.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:17.550709 systemd[1]: Starting ignition-fetch-offline.service... Oct 31 05:42:17.663164 systemd[1]: Finished parse-ip-for-networkd.service. Oct 31 05:42:17.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:17.665000 audit: BPF prog-id=9 op=LOAD Oct 31 05:42:17.666612 systemd[1]: Starting systemd-networkd.service... Oct 31 05:42:17.728409 ignition[639]: Ignition 2.14.0 Oct 31 05:42:17.728581 systemd-networkd[711]: lo: Link UP Oct 31 05:42:17.728588 systemd-networkd[711]: lo: Gained carrier Oct 31 05:42:17.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:17.729848 systemd-networkd[711]: Enumeration completed Oct 31 05:42:17.732522 ignition[639]: Stage: fetch-offline Oct 31 05:42:17.730268 systemd-networkd[711]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 31 05:42:17.732668 ignition[639]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Oct 31 05:42:17.732183 systemd-networkd[711]: eth0: Link UP
Oct 31 05:42:17.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:17.732719 ignition[639]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Oct 31 05:42:17.732189 systemd-networkd[711]: eth0: Gained carrier
Oct 31 05:42:17.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:17.734628 ignition[639]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Oct 31 05:42:17.732552 systemd[1]: Started systemd-networkd.service.
Oct 31 05:42:17.734803 ignition[639]: parsed url from cmdline: ""
Oct 31 05:42:17.733835 systemd[1]: Reached target network.target.
Oct 31 05:42:17.734811 ignition[639]: no config URL provided
Oct 31 05:42:17.735996 systemd[1]: Starting iscsiuio.service...
Oct 31 05:42:17.734831 ignition[639]: reading system config file "/usr/lib/ignition/user.ign"
Oct 31 05:42:17.772104 iscsid[717]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Oct 31 05:42:17.772104 iscsid[717]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Oct 31 05:42:17.772104 iscsid[717]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Oct 31 05:42:17.772104 iscsid[717]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Oct 31 05:42:17.772104 iscsid[717]: If using hardware iscsi like qla4xxx this message can be ignored.
Oct 31 05:42:17.772104 iscsid[717]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Oct 31 05:42:17.772104 iscsid[717]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Oct 31 05:42:17.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:17.756289 systemd[1]: Started iscsiuio.service.
Oct 31 05:42:17.734848 ignition[639]: no config at "/usr/lib/ignition/user.ign"
Oct 31 05:42:17.757294 systemd[1]: Finished ignition-fetch-offline.service.
Oct 31 05:42:17.734857 ignition[639]: failed to fetch config: resource requires networking
Oct 31 05:42:17.759886 systemd[1]: Starting ignition-fetch.service...
Oct 31 05:42:17.735076 ignition[639]: Ignition finished successfully
Oct 31 05:42:17.762406 systemd[1]: Starting iscsid.service...
Oct 31 05:42:17.773482 ignition[716]: Ignition 2.14.0
Oct 31 05:42:17.778615 systemd[1]: Started iscsid.service.
Oct 31 05:42:17.773494 ignition[716]: Stage: fetch
Oct 31 05:42:17.784612 systemd[1]: Starting dracut-initqueue.service...
Oct 31 05:42:17.773658 ignition[716]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Oct 31 05:42:17.793302 systemd-networkd[711]: eth0: DHCPv4 address 10.244.21.74/30, gateway 10.244.21.73 acquired from 10.244.21.73
Oct 31 05:42:17.773694 ignition[716]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Oct 31 05:42:17.800723 systemd[1]: Finished dracut-initqueue.service.
Oct 31 05:42:17.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:17.774998 ignition[716]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Oct 31 05:42:17.801563 systemd[1]: Reached target remote-fs-pre.target.
Oct 31 05:42:17.775177 ignition[716]: parsed url from cmdline: ""
Oct 31 05:42:17.802875 systemd[1]: Reached target remote-cryptsetup.target.
Oct 31 05:42:17.775186 ignition[716]: no config URL provided
Oct 31 05:42:17.804456 systemd[1]: Reached target remote-fs.target.
Oct 31 05:42:17.775197 ignition[716]: reading system config file "/usr/lib/ignition/user.ign"
Oct 31 05:42:17.807619 systemd[1]: Starting dracut-pre-mount.service...
Oct 31 05:42:17.775214 ignition[716]: no config at "/usr/lib/ignition/user.ign"
Oct 31 05:42:17.780162 ignition[716]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Oct 31 05:42:17.780292 ignition[716]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Oct 31 05:42:17.780334 ignition[716]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Oct 31 05:42:17.787006 ignition[716]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable
Oct 31 05:42:17.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:17.820662 systemd[1]: Finished dracut-pre-mount.service.
Oct 31 05:42:17.988194 ignition[716]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2
Oct 31 05:42:18.006284 ignition[716]: GET result: OK
Oct 31 05:42:18.006494 ignition[716]: parsing config with SHA512: 62469c2c23653384fe08c85f40c4205d29476247fea88a2e5ab527bed1937d369913a63bc2ee47d0ef3e22ec4d10a10fbed47fde0cc3152a85a26bd65cc07c89
Oct 31 05:42:18.021105 unknown[716]: fetched base config from "system"
Oct 31 05:42:18.021142 unknown[716]: fetched base config from "system"
Oct 31 05:42:18.021860 ignition[716]: fetch: fetch complete
Oct 31 05:42:18.021153 unknown[716]: fetched user config from "openstack"
Oct 31 05:42:18.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:18.021870 ignition[716]: fetch: fetch passed
Oct 31 05:42:18.023819 systemd[1]: Finished ignition-fetch.service.
Oct 31 05:42:18.021936 ignition[716]: Ignition finished successfully
Oct 31 05:42:18.026079 systemd[1]: Starting ignition-kargs.service...
Oct 31 05:42:18.039864 ignition[736]: Ignition 2.14.0
Oct 31 05:42:18.039884 ignition[736]: Stage: kargs
Oct 31 05:42:18.040068 ignition[736]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Oct 31 05:42:18.040115 ignition[736]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Oct 31 05:42:18.041493 ignition[736]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Oct 31 05:42:18.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:18.044601 systemd[1]: Finished ignition-kargs.service.
Oct 31 05:42:18.043170 ignition[736]: kargs: kargs passed
Oct 31 05:42:18.046997 systemd[1]: Starting ignition-disks.service...
Oct 31 05:42:18.043245 ignition[736]: Ignition finished successfully
Oct 31 05:42:18.059115 ignition[741]: Ignition 2.14.0
Oct 31 05:42:18.059157 ignition[741]: Stage: disks
Oct 31 05:42:18.059370 ignition[741]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Oct 31 05:42:18.059407 ignition[741]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Oct 31 05:42:18.060772 ignition[741]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Oct 31 05:42:18.062480 ignition[741]: disks: disks passed
Oct 31 05:42:18.063832 systemd[1]: Finished ignition-disks.service.
Oct 31 05:42:18.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:18.062552 ignition[741]: Ignition finished successfully
Oct 31 05:42:18.065172 systemd[1]: Reached target initrd-root-device.target.
Oct 31 05:42:18.066378 systemd[1]: Reached target local-fs-pre.target.
Oct 31 05:42:18.067626 systemd[1]: Reached target local-fs.target.
Oct 31 05:42:18.068976 systemd[1]: Reached target sysinit.target.
Oct 31 05:42:18.070225 systemd[1]: Reached target basic.target.
Oct 31 05:42:18.072972 systemd[1]: Starting systemd-fsck-root.service...
Oct 31 05:42:18.095010 systemd-fsck[748]: ROOT: clean, 637/1628000 files, 124069/1617920 blocks
Oct 31 05:42:18.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:18.099571 systemd[1]: Finished systemd-fsck-root.service.
Oct 31 05:42:18.101449 systemd[1]: Mounting sysroot.mount...
Oct 31 05:42:18.115170 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Oct 31 05:42:18.116497 systemd[1]: Mounted sysroot.mount.
Oct 31 05:42:18.117327 systemd[1]: Reached target initrd-root-fs.target.
Oct 31 05:42:18.120338 systemd[1]: Mounting sysroot-usr.mount...
Oct 31 05:42:18.121659 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Oct 31 05:42:18.122720 systemd[1]: Starting flatcar-openstack-hostname.service...
Oct 31 05:42:18.125972 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 31 05:42:18.126018 systemd[1]: Reached target ignition-diskful.target.
Oct 31 05:42:18.130103 systemd[1]: Mounted sysroot-usr.mount.
Oct 31 05:42:18.133788 systemd[1]: Starting initrd-setup-root.service...
Oct 31 05:42:18.144412 initrd-setup-root[759]: cut: /sysroot/etc/passwd: No such file or directory
Oct 31 05:42:18.160247 initrd-setup-root[767]: cut: /sysroot/etc/group: No such file or directory
Oct 31 05:42:18.167975 initrd-setup-root[775]: cut: /sysroot/etc/shadow: No such file or directory
Oct 31 05:42:18.179738 initrd-setup-root[784]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 31 05:42:18.251958 systemd[1]: Finished initrd-setup-root.service.
Oct 31 05:42:18.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:18.256130 systemd[1]: Starting ignition-mount.service...
Oct 31 05:42:18.263742 systemd[1]: Starting sysroot-boot.service...
Oct 31 05:42:18.270580 coreos-metadata[754]: Oct 31 05:42:18.270 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Oct 31 05:42:18.271976 bash[802]: umount: /sysroot/usr/share/oem: not mounted.
Oct 31 05:42:18.289978 coreos-metadata[754]: Oct 31 05:42:18.289 INFO Fetch successful
Oct 31 05:42:18.292148 coreos-metadata[754]: Oct 31 05:42:18.291 INFO wrote hostname srv-f2mor.gb1.brightbox.com to /sysroot/etc/hostname
Oct 31 05:42:18.293208 ignition[803]: INFO : Ignition 2.14.0
Oct 31 05:42:18.293208 ignition[803]: INFO : Stage: mount
Oct 31 05:42:18.293208 ignition[803]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Oct 31 05:42:18.293208 ignition[803]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Oct 31 05:42:18.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:18.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:18.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:18.298878 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Oct 31 05:42:18.304422 ignition[803]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Oct 31 05:42:18.304422 ignition[803]: INFO : mount: mount passed
Oct 31 05:42:18.304422 ignition[803]: INFO : Ignition finished successfully
Oct 31 05:42:18.299261 systemd[1]: Finished flatcar-openstack-hostname.service.
Oct 31 05:42:18.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:18.302083 systemd[1]: Finished ignition-mount.service.
Oct 31 05:42:18.321225 systemd[1]: Finished sysroot-boot.service.
Oct 31 05:42:18.405554 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Oct 31 05:42:18.418177 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (812)
Oct 31 05:42:18.422733 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 31 05:42:18.422786 kernel: BTRFS info (device vda6): using free space tree
Oct 31 05:42:18.422845 kernel: BTRFS info (device vda6): has skinny extents
Oct 31 05:42:18.430250 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Oct 31 05:42:18.432344 systemd[1]: Starting ignition-files.service...
Oct 31 05:42:18.454985 ignition[832]: INFO : Ignition 2.14.0
Oct 31 05:42:18.454985 ignition[832]: INFO : Stage: files
Oct 31 05:42:18.456934 ignition[832]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Oct 31 05:42:18.456934 ignition[832]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Oct 31 05:42:18.456934 ignition[832]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Oct 31 05:42:18.462208 ignition[832]: DEBUG : files: compiled without relabeling support, skipping
Oct 31 05:42:18.463188 ignition[832]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 31 05:42:18.463188 ignition[832]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 31 05:42:18.466326 ignition[832]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 31 05:42:18.467606 ignition[832]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 31 05:42:18.470050 ignition[832]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 31 05:42:18.468548 unknown[832]: wrote ssh authorized keys file for user: core
Oct 31 05:42:18.476407 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Oct 31 05:42:18.477627 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Oct 31 05:42:18.477627 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Oct 31 05:42:18.477627 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Oct 31 05:42:18.702697 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Oct 31 05:42:18.951264 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Oct 31 05:42:18.952734 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Oct 31 05:42:18.952734 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Oct 31 05:42:18.952734 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 31 05:42:18.956036 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 31 05:42:18.956036 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 31 05:42:18.956036 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 31 05:42:18.956036 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 31 05:42:18.956036 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 31 05:42:18.956036 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 31 05:42:18.956036 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 31 05:42:18.956036 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 31 05:42:18.956036 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 31 05:42:18.956036 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 31 05:42:18.956036 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Oct 31 05:42:19.043506 systemd-networkd[711]: eth0: Gained IPv6LL
Oct 31 05:42:19.270472 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Oct 31 05:42:20.502964 ignition[832]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Oct 31 05:42:20.505850 ignition[832]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service"
Oct 31 05:42:20.505850 ignition[832]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service"
Oct 31 05:42:20.505850 ignition[832]: INFO : files: op(d): [started] processing unit "containerd.service"
Oct 31 05:42:20.508669 ignition[832]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Oct 31 05:42:20.508669 ignition[832]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Oct 31 05:42:20.508669 ignition[832]: INFO : files: op(d): [finished] processing unit "containerd.service"
Oct 31 05:42:20.508669 ignition[832]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Oct 31 05:42:20.508669 ignition[832]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 31 05:42:20.508669 ignition[832]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 31 05:42:20.508669 ignition[832]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Oct 31 05:42:20.508669 ignition[832]: INFO : files: op(11): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Oct 31 05:42:20.508669 ignition[832]: INFO : files: op(11): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Oct 31 05:42:20.508669 ignition[832]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Oct 31 05:42:20.508669 ignition[832]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Oct 31 05:42:20.534093 kernel: kauditd_printk_skb: 28 callbacks suppressed
Oct 31 05:42:20.534149 kernel: audit: type=1130 audit(1761889340.520:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:20.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:20.517729 systemd[1]: Finished ignition-files.service.
Oct 31 05:42:20.535153 ignition[832]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 31 05:42:20.535153 ignition[832]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 31 05:42:20.535153 ignition[832]: INFO : files: files passed
Oct 31 05:42:20.535153 ignition[832]: INFO : Ignition finished successfully
Oct 31 05:42:20.560275 kernel: audit: type=1130 audit(1761889340.540:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:20.560315 kernel: audit: type=1131 audit(1761889340.540:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:20.560334 kernel: audit: type=1130 audit(1761889340.551:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:20.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:20.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:20.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:20.521857 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Oct 31 05:42:20.529770 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Oct 31 05:42:20.562732 initrd-setup-root-after-ignition[857]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 31 05:42:20.531335 systemd[1]: Starting ignition-quench.service...
Oct 31 05:42:20.538479 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 31 05:42:20.538623 systemd[1]: Finished ignition-quench.service.
Oct 31 05:42:20.540716 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Oct 31 05:42:20.551938 systemd[1]: Reached target ignition-complete.target.
Oct 31 05:42:20.558886 systemd-networkd[711]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:552:24:19ff:fef4:154a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:552:24:19ff:fef4:154a/64 assigned by NDisc.
Oct 31 05:42:20.558906 systemd-networkd[711]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Oct 31 05:42:20.560880 systemd[1]: Starting initrd-parse-etc.service...
Oct 31 05:42:20.582038 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 31 05:42:20.582245 systemd[1]: Finished initrd-parse-etc.service.
Oct 31 05:42:20.593890 kernel: audit: type=1130 audit(1761889340.583:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:20.593929 kernel: audit: type=1131 audit(1761889340.583:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:20.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:20.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:20.583903 systemd[1]: Reached target initrd-fs.target.
Oct 31 05:42:20.594551 systemd[1]: Reached target initrd.target.
Oct 31 05:42:20.610275 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Oct 31 05:42:20.611616 systemd[1]: Starting dracut-pre-pivot.service...
Oct 31 05:42:20.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:20.628510 systemd[1]: Finished dracut-pre-pivot.service.
Oct 31 05:42:20.635313 kernel: audit: type=1130 audit(1761889340.627:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:20.635379 systemd[1]: Starting initrd-cleanup.service...
Oct 31 05:42:20.649970 systemd[1]: Stopped target nss-lookup.target.
Oct 31 05:42:20.650807 systemd[1]: Stopped target remote-cryptsetup.target.
Oct 31 05:42:20.652201 systemd[1]: Stopped target timers.target.
Oct 31 05:42:20.653434 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 31 05:42:20.660114 kernel: audit: type=1131 audit(1761889340.654:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:20.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:20.653673 systemd[1]: Stopped dracut-pre-pivot.service.
Oct 31 05:42:20.654888 systemd[1]: Stopped target initrd.target.
Oct 31 05:42:20.661034 systemd[1]: Stopped target basic.target.
Oct 31 05:42:20.663623 systemd[1]: Stopped target ignition-complete.target.
Oct 31 05:42:20.664578 systemd[1]: Stopped target ignition-diskful.target.
Oct 31 05:42:20.665874 systemd[1]: Stopped target initrd-root-device.target.
Oct 31 05:42:20.667254 systemd[1]: Stopped target remote-fs.target.
Oct 31 05:42:20.668581 systemd[1]: Stopped target remote-fs-pre.target.
Oct 31 05:42:20.669966 systemd[1]: Stopped target sysinit.target.
Oct 31 05:42:20.671396 systemd[1]: Stopped target local-fs.target.
Oct 31 05:42:20.672552 systemd[1]: Stopped target local-fs-pre.target.
Oct 31 05:42:20.673753 systemd[1]: Stopped target swap.target.
Oct 31 05:42:20.674863 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 31 05:42:20.681410 kernel: audit: type=1131 audit(1761889340.676:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:20.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:20.675112 systemd[1]: Stopped dracut-pre-mount.service.
Oct 31 05:42:20.676355 systemd[1]: Stopped target cryptsetup.target.
Oct 31 05:42:20.688549 kernel: audit: type=1131 audit(1761889340.683:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:20.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:20.682107 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 31 05:42:20.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:20.682374 systemd[1]: Stopped dracut-initqueue.service.
Oct 31 05:42:20.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:20.683524 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 31 05:42:20.683745 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Oct 31 05:42:20.689468 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 31 05:42:20.689681 systemd[1]: Stopped ignition-files.service.
Oct 31 05:42:20.692084 systemd[1]: Stopping ignition-mount.service...
Oct 31 05:42:20.698577 iscsid[717]: iscsid shutting down.
Oct 31 05:42:20.703179 systemd[1]: Stopping iscsid.service...
Oct 31 05:42:20.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:20.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:20.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:20.704541 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 31 05:42:20.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:20.704751 systemd[1]: Stopped kmod-static-nodes.service.
Oct 31 05:42:20.706812 systemd[1]: Stopping sysroot-boot.service...
Oct 31 05:42:20.707506 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 31 05:42:20.707678 systemd[1]: Stopped systemd-udev-trigger.service.
Oct 31 05:42:20.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:20.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:20.708480 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 31 05:42:20.708627 systemd[1]: Stopped dracut-pre-trigger.service.
Oct 31 05:42:20.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:20.711437 systemd[1]: iscsid.service: Deactivated successfully.
Oct 31 05:42:20.713213 systemd[1]: Stopped iscsid.service.
Oct 31 05:42:20.715587 systemd[1]: Stopping iscsiuio.service...
Oct 31 05:42:20.717831 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 31 05:42:20.718990 systemd[1]: Finished initrd-cleanup.service.
Oct 31 05:42:20.719949 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 31 05:42:20.721010 systemd[1]: Stopped iscsiuio.service. Oct 31 05:42:20.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:20.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:20.739476 ignition[870]: INFO : Ignition 2.14.0 Oct 31 05:42:20.739476 ignition[870]: INFO : Stage: umount Oct 31 05:42:20.739476 ignition[870]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 31 05:42:20.739476 ignition[870]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Oct 31 05:42:20.739476 ignition[870]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 31 05:42:20.739476 ignition[870]: INFO : umount: umount passed Oct 31 05:42:20.739476 ignition[870]: INFO : Ignition finished successfully Oct 31 05:42:20.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:20.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:20.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:20.735052 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Oct 31 05:42:20.736921 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 31 05:42:20.737068 systemd[1]: Stopped ignition-mount.service. Oct 31 05:42:20.737912 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 31 05:42:20.737981 systemd[1]: Stopped ignition-disks.service. Oct 31 05:42:20.738682 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 31 05:42:20.738742 systemd[1]: Stopped ignition-kargs.service. Oct 31 05:42:20.740334 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 31 05:42:20.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:20.740395 systemd[1]: Stopped ignition-fetch.service. Oct 31 05:42:20.742711 systemd[1]: Stopped target network.target. Oct 31 05:42:20.743315 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 31 05:42:20.743380 systemd[1]: Stopped ignition-fetch-offline.service. Oct 31 05:42:20.744044 systemd[1]: Stopped target paths.target. Oct 31 05:42:20.746035 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 31 05:42:20.749177 systemd[1]: Stopped systemd-ask-password-console.path. Oct 31 05:42:20.750541 systemd[1]: Stopped target slices.target. Oct 31 05:42:20.751858 systemd[1]: Stopped target sockets.target. Oct 31 05:42:20.753276 systemd[1]: iscsid.socket: Deactivated successfully. Oct 31 05:42:20.753338 systemd[1]: Closed iscsid.socket. Oct 31 05:42:20.754460 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 31 05:42:20.754521 systemd[1]: Closed iscsiuio.socket. Oct 31 05:42:20.755871 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 31 05:42:20.755943 systemd[1]: Stopped ignition-setup.service. Oct 31 05:42:20.757819 systemd[1]: Stopping systemd-networkd.service... Oct 31 05:42:20.758987 systemd[1]: Stopping systemd-resolved.service... 
Oct 31 05:42:20.762197 systemd-networkd[711]: eth0: DHCPv6 lease lost Oct 31 05:42:20.763528 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 31 05:42:20.763700 systemd[1]: Stopped systemd-networkd.service. Oct 31 05:42:20.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:20.777979 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 31 05:42:20.778213 systemd[1]: Stopped systemd-resolved.service. Oct 31 05:42:20.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:20.780000 audit: BPF prog-id=9 op=UNLOAD Oct 31 05:42:20.780738 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 31 05:42:20.781000 audit: BPF prog-id=6 op=UNLOAD Oct 31 05:42:20.780808 systemd[1]: Closed systemd-networkd.socket. Oct 31 05:42:20.783293 systemd[1]: Stopping network-cleanup.service... Oct 31 05:42:20.785714 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 31 05:42:20.785796 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 31 05:42:20.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:20.787466 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 31 05:42:20.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:20.787534 systemd[1]: Stopped systemd-sysctl.service. 
Oct 31 05:42:20.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:20.789291 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 31 05:42:20.789359 systemd[1]: Stopped systemd-modules-load.service. Oct 31 05:42:20.790352 systemd[1]: Stopping systemd-udevd.service... Oct 31 05:42:20.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:20.798542 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 31 05:42:20.799444 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 31 05:42:20.799680 systemd[1]: Stopped systemd-udevd.service. Oct 31 05:42:20.818399 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 31 05:42:20.818483 systemd[1]: Closed systemd-udevd-control.socket. Oct 31 05:42:20.821994 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 31 05:42:20.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:20.822049 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 31 05:42:20.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:20.823638 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 31 05:42:20.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 05:42:20.823707 systemd[1]: Stopped dracut-pre-udev.service. Oct 31 05:42:20.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:20.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:20.825055 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 31 05:42:20.825148 systemd[1]: Stopped dracut-cmdline.service. Oct 31 05:42:20.826511 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 31 05:42:20.826576 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 31 05:42:20.828861 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 31 05:42:20.829580 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 31 05:42:20.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:20.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:20.829651 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 31 05:42:20.830852 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 31 05:42:20.831000 systemd[1]: Stopped network-cleanup.service. Oct 31 05:42:20.841230 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 31 05:42:20.841390 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 31 05:42:20.871850 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Oct 31 05:42:20.872018 systemd[1]: Stopped sysroot-boot.service. Oct 31 05:42:20.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:20.873763 systemd[1]: Reached target initrd-switch-root.target. Oct 31 05:42:20.874896 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 31 05:42:20.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:20.874970 systemd[1]: Stopped initrd-setup-root.service. Oct 31 05:42:20.877386 systemd[1]: Starting initrd-switch-root.service... Oct 31 05:42:20.889977 systemd[1]: Switching root. Oct 31 05:42:20.890000 audit: BPF prog-id=5 op=UNLOAD Oct 31 05:42:20.890000 audit: BPF prog-id=4 op=UNLOAD Oct 31 05:42:20.890000 audit: BPF prog-id=3 op=UNLOAD Oct 31 05:42:20.895000 audit: BPF prog-id=8 op=UNLOAD Oct 31 05:42:20.895000 audit: BPF prog-id=7 op=UNLOAD Oct 31 05:42:20.915618 systemd-journald[201]: Journal stopped Oct 31 05:42:25.096985 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). Oct 31 05:42:25.097183 kernel: SELinux: Class mctp_socket not defined in policy. Oct 31 05:42:25.097258 kernel: SELinux: Class anon_inode not defined in policy. 
Oct 31 05:42:25.097287 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 31 05:42:25.097316 kernel: SELinux: policy capability network_peer_controls=1 Oct 31 05:42:25.097344 kernel: SELinux: policy capability open_perms=1 Oct 31 05:42:25.097381 kernel: SELinux: policy capability extended_socket_class=1 Oct 31 05:42:25.097410 kernel: SELinux: policy capability always_check_network=0 Oct 31 05:42:25.097448 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 31 05:42:25.097469 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 31 05:42:25.097495 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 31 05:42:25.097521 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 31 05:42:25.097550 systemd[1]: Successfully loaded SELinux policy in 60.303ms. Oct 31 05:42:25.097609 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.114ms. Oct 31 05:42:25.097635 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 31 05:42:25.097658 systemd[1]: Detected virtualization kvm. Oct 31 05:42:25.097690 systemd[1]: Detected architecture x86-64. Oct 31 05:42:25.097712 systemd[1]: Detected first boot. Oct 31 05:42:25.097739 systemd[1]: Hostname set to . Oct 31 05:42:25.097767 systemd[1]: Initializing machine ID from VM UUID. Oct 31 05:42:25.097800 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Oct 31 05:42:25.097833 systemd[1]: Populated /etc with preset unit settings. Oct 31 05:42:25.097862 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Oct 31 05:42:25.097905 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 31 05:42:25.097931 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 31 05:42:25.097969 systemd[1]: Queued start job for default target multi-user.target. Oct 31 05:42:25.097992 systemd[1]: Unnecessary job was removed for dev-vda6.device. Oct 31 05:42:25.098013 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 31 05:42:25.098040 systemd[1]: Created slice system-addon\x2drun.slice. Oct 31 05:42:25.098072 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Oct 31 05:42:25.098105 systemd[1]: Created slice system-getty.slice. Oct 31 05:42:25.099282 systemd[1]: Created slice system-modprobe.slice. Oct 31 05:42:25.099325 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 31 05:42:25.099356 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 31 05:42:25.099387 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 31 05:42:25.099410 systemd[1]: Created slice user.slice. Oct 31 05:42:25.099438 systemd[1]: Started systemd-ask-password-console.path. Oct 31 05:42:25.099460 systemd[1]: Started systemd-ask-password-wall.path. Oct 31 05:42:25.099481 systemd[1]: Set up automount boot.automount. Oct 31 05:42:25.099515 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 31 05:42:25.099597 systemd[1]: Reached target integritysetup.target. Oct 31 05:42:25.099621 systemd[1]: Reached target remote-cryptsetup.target. Oct 31 05:42:25.099643 systemd[1]: Reached target remote-fs.target. Oct 31 05:42:25.099664 systemd[1]: Reached target slices.target. Oct 31 05:42:25.099685 systemd[1]: Reached target swap.target. Oct 31 05:42:25.099706 systemd[1]: Reached target torcx.target. 
Oct 31 05:42:25.099749 systemd[1]: Reached target veritysetup.target. Oct 31 05:42:25.099779 systemd[1]: Listening on systemd-coredump.socket. Oct 31 05:42:25.099808 systemd[1]: Listening on systemd-initctl.socket. Oct 31 05:42:25.099831 systemd[1]: Listening on systemd-journald-audit.socket. Oct 31 05:42:25.099858 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 31 05:42:25.099891 systemd[1]: Listening on systemd-journald.socket. Oct 31 05:42:25.099915 systemd[1]: Listening on systemd-networkd.socket. Oct 31 05:42:25.099946 systemd[1]: Listening on systemd-udevd-control.socket. Oct 31 05:42:25.099970 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 31 05:42:25.099992 systemd[1]: Listening on systemd-userdbd.socket. Oct 31 05:42:25.100026 systemd[1]: Mounting dev-hugepages.mount... Oct 31 05:42:25.100049 systemd[1]: Mounting dev-mqueue.mount... Oct 31 05:42:25.100070 systemd[1]: Mounting media.mount... Oct 31 05:42:25.100109 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 05:42:25.100173 systemd[1]: Mounting sys-kernel-debug.mount... Oct 31 05:42:25.100227 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 31 05:42:25.100251 systemd[1]: Mounting tmp.mount... Oct 31 05:42:25.100280 systemd[1]: Starting flatcar-tmpfiles.service... Oct 31 05:42:25.100302 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 31 05:42:25.100336 systemd[1]: Starting kmod-static-nodes.service... Oct 31 05:42:25.100359 systemd[1]: Starting modprobe@configfs.service... Oct 31 05:42:25.100389 systemd[1]: Starting modprobe@dm_mod.service... Oct 31 05:42:25.100428 systemd[1]: Starting modprobe@drm.service... Oct 31 05:42:25.100450 systemd[1]: Starting modprobe@efi_pstore.service... Oct 31 05:42:25.100471 systemd[1]: Starting modprobe@fuse.service... Oct 31 05:42:25.100491 systemd[1]: Starting modprobe@loop.service... 
Oct 31 05:42:25.100519 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 31 05:42:25.100554 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Oct 31 05:42:25.100588 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Oct 31 05:42:25.100611 systemd[1]: Starting systemd-journald.service... Oct 31 05:42:25.100633 kernel: fuse: init (API version 7.34) Oct 31 05:42:25.100664 systemd[1]: Starting systemd-modules-load.service... Oct 31 05:42:25.100687 systemd[1]: Starting systemd-network-generator.service... Oct 31 05:42:25.100713 kernel: loop: module loaded Oct 31 05:42:25.100740 systemd[1]: Starting systemd-remount-fs.service... Oct 31 05:42:25.100768 systemd[1]: Starting systemd-udev-trigger.service... Oct 31 05:42:25.100796 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 05:42:25.100829 systemd[1]: Mounted dev-hugepages.mount. Oct 31 05:42:25.100852 systemd[1]: Mounted dev-mqueue.mount. Oct 31 05:42:25.100879 systemd[1]: Mounted media.mount. Oct 31 05:42:25.100901 systemd[1]: Mounted sys-kernel-debug.mount. Oct 31 05:42:25.100922 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 31 05:42:25.100942 systemd[1]: Mounted tmp.mount. Oct 31 05:42:25.100978 systemd[1]: Finished kmod-static-nodes.service. Oct 31 05:42:25.101000 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 31 05:42:25.101029 systemd[1]: Finished modprobe@configfs.service. Oct 31 05:42:25.101063 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 31 05:42:25.101092 systemd[1]: Finished modprobe@dm_mod.service. Oct 31 05:42:25.101178 systemd[1]: Finished flatcar-tmpfiles.service. Oct 31 05:42:25.101217 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Oct 31 05:42:25.101241 systemd[1]: Finished modprobe@drm.service. Oct 31 05:42:25.101262 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 31 05:42:25.101283 systemd[1]: Finished modprobe@efi_pstore.service. Oct 31 05:42:25.101304 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 31 05:42:25.101324 systemd[1]: Finished modprobe@fuse.service. Oct 31 05:42:25.101360 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 31 05:42:25.101382 systemd[1]: Finished modprobe@loop.service. Oct 31 05:42:25.101403 systemd[1]: Finished systemd-modules-load.service. Oct 31 05:42:25.101430 systemd[1]: Finished systemd-network-generator.service. Oct 31 05:42:25.101459 systemd-journald[1015]: Journal started Oct 31 05:42:25.101579 systemd-journald[1015]: Runtime Journal (/run/log/journal/a23de79e896e47598bec33a5bfb3cda8) is 4.7M, max 38.1M, 33.3M free. Oct 31 05:42:24.820000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 31 05:42:24.820000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Oct 31 05:42:25.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:25.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 05:42:25.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:25.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:25.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:25.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:25.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:25.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:25.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:25.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 05:42:25.081000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 31 05:42:25.081000 audit[1015]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fff39ce71d0 a2=4000 a3=7fff39ce726c items=0 ppid=1 pid=1015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:42:25.081000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 31 05:42:25.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:25.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:25.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:25.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:25.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 05:42:25.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:25.104169 systemd[1]: Finished systemd-remount-fs.service. Oct 31 05:42:25.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:25.107183 systemd[1]: Started systemd-journald.service. Oct 31 05:42:25.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:25.109468 systemd[1]: Reached target network-pre.target. Oct 31 05:42:25.112005 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 31 05:42:25.114640 systemd[1]: Mounting sys-kernel-config.mount... Oct 31 05:42:25.115363 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 31 05:42:25.121257 systemd[1]: Starting systemd-hwdb-update.service... Oct 31 05:42:25.129793 systemd[1]: Starting systemd-journal-flush.service... Oct 31 05:42:25.130668 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 31 05:42:25.134272 systemd[1]: Starting systemd-random-seed.service... Oct 31 05:42:25.135065 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 31 05:42:25.137084 systemd[1]: Starting systemd-sysctl.service... Oct 31 05:42:25.140881 systemd[1]: Starting systemd-sysusers.service... Oct 31 05:42:25.151896 systemd[1]: Mounted sys-fs-fuse-connections.mount. 
Oct 31 05:42:25.152747 systemd[1]: Mounted sys-kernel-config.mount. Oct 31 05:42:25.168015 systemd-journald[1015]: Time spent on flushing to /var/log/journal/a23de79e896e47598bec33a5bfb3cda8 is 79.261ms for 1233 entries. Oct 31 05:42:25.168015 systemd-journald[1015]: System Journal (/var/log/journal/a23de79e896e47598bec33a5bfb3cda8) is 8.0M, max 584.8M, 576.8M free. Oct 31 05:42:25.270523 systemd-journald[1015]: Received client request to flush runtime journal. Oct 31 05:42:25.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:25.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:25.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:25.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:25.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:25.177341 systemd[1]: Finished systemd-random-seed.service. Oct 31 05:42:25.178262 systemd[1]: Reached target first-boot-complete.target. Oct 31 05:42:25.189098 systemd[1]: Finished systemd-sysctl.service. Oct 31 05:42:25.203561 systemd[1]: Finished systemd-sysusers.service. 
Oct 31 05:42:25.206526 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 31 05:42:25.265448 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 31 05:42:25.271775 systemd[1]: Finished systemd-journal-flush.service. Oct 31 05:42:25.298744 systemd[1]: Finished systemd-udev-trigger.service. Oct 31 05:42:25.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:25.301513 systemd[1]: Starting systemd-udev-settle.service... Oct 31 05:42:25.313243 udevadm[1066]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Oct 31 05:42:25.837720 systemd[1]: Finished systemd-hwdb-update.service. Oct 31 05:42:25.845399 kernel: kauditd_printk_skb: 77 callbacks suppressed Oct 31 05:42:25.845529 kernel: audit: type=1130 audit(1761889345.838:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:25.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:25.840448 systemd[1]: Starting systemd-udevd.service... Oct 31 05:42:25.872382 systemd-udevd[1068]: Using default interface naming scheme 'v252'. Oct 31 05:42:25.907344 systemd[1]: Started systemd-udevd.service. Oct 31 05:42:25.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:25.910907 systemd[1]: Starting systemd-networkd.service... 
Oct 31 05:42:25.914173 kernel: audit: type=1130 audit(1761889345.907:118): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:25.924325 systemd[1]: Starting systemd-userdbd.service... Oct 31 05:42:25.988522 systemd[1]: Started systemd-userdbd.service. Oct 31 05:42:25.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:25.995156 kernel: audit: type=1130 audit(1761889345.989:119): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:26.055782 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 31 05:42:26.068787 systemd[1]: Found device dev-ttyS0.device. Oct 31 05:42:26.150705 systemd-networkd[1069]: lo: Link UP Oct 31 05:42:26.150719 systemd-networkd[1069]: lo: Gained carrier Oct 31 05:42:26.151784 systemd-networkd[1069]: Enumeration completed Oct 31 05:42:26.152009 systemd[1]: Started systemd-networkd.service. Oct 31 05:42:26.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:26.153424 systemd-networkd[1069]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 31 05:42:26.160802 kernel: audit: type=1130 audit(1761889346.152:120): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 05:42:26.162991 systemd-networkd[1069]: eth0: Link UP Oct 31 05:42:26.163003 systemd-networkd[1069]: eth0: Gained carrier Oct 31 05:42:26.172196 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 31 05:42:26.179154 kernel: ACPI: button: Power Button [PWRF] Oct 31 05:42:26.182393 systemd-networkd[1069]: eth0: DHCPv4 address 10.244.21.74/30, gateway 10.244.21.73 acquired from 10.244.21.73 Oct 31 05:42:26.214182 kernel: mousedev: PS/2 mouse device common for all mice Oct 31 05:42:26.228000 audit[1081]: AVC avc: denied { confidentiality } for pid=1081 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Oct 31 05:42:26.238154 kernel: audit: type=1400 audit(1761889346.228:121): avc: denied { confidentiality } for pid=1081 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Oct 31 05:42:26.228000 audit[1081]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=563d38102ea0 a1=338ec a2=7fcbec2bebc5 a3=5 items=110 ppid=1068 pid=1081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:42:26.256112 kernel: audit: type=1300 audit(1761889346.228:121): arch=c000003e syscall=175 success=yes exit=0 a0=563d38102ea0 a1=338ec a2=7fcbec2bebc5 a3=5 items=110 ppid=1068 pid=1081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:42:26.256283 kernel: audit: type=1307 audit(1761889346.228:121): cwd="/" Oct 31 05:42:26.228000 audit: CWD cwd="/"
[110 audit PATH records (items 0-109 of event :121) omitted: repetitive PARENT/CREATE entries for tracefs nodes (dev=00:0b, obj=system_u:object_r:tracefs_t:s0) and one debugfs directory (dev=00:07, obj=system_u:object_r:debugfs_t:s0) created by (udev-worker), along with the interleaved kernel "audit: type=1302" echo lines for items 0-2]
Oct 31 05:42:26.228000 audit: PROCTITLE proctitle="(udev-worker)" Oct 31 05:42:26.312379 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Oct 31 05:42:26.319147 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 31 05:42:26.334479 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Oct 31 05:42:26.334760 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 31 05:42:26.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:26.476012 systemd[1]: Finished systemd-udev-settle.service. Oct 31 05:42:26.478980 systemd[1]: Starting lvm2-activation-early.service... Oct 31 05:42:26.503639 lvm[1098]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 31 05:42:26.536845 systemd[1]: Finished lvm2-activation-early.service. Oct 31 05:42:26.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:26.537747 systemd[1]: Reached target cryptsetup.target. 
Oct 31 05:42:26.540545 systemd[1]: Starting lvm2-activation.service... Oct 31 05:42:26.547864 lvm[1100]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 31 05:42:26.576930 systemd[1]: Finished lvm2-activation.service. Oct 31 05:42:26.577883 systemd[1]: Reached target local-fs-pre.target. Oct 31 05:42:26.579090 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 31 05:42:26.579152 systemd[1]: Reached target local-fs.target. Oct 31 05:42:26.579781 systemd[1]: Reached target machines.target. Oct 31 05:42:26.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:26.582565 systemd[1]: Starting ldconfig.service... Oct 31 05:42:26.584158 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 31 05:42:26.584450 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 31 05:42:26.586996 systemd[1]: Starting systemd-boot-update.service... Oct 31 05:42:26.590856 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 31 05:42:26.593692 systemd[1]: Starting systemd-machine-id-commit.service... Oct 31 05:42:26.597663 systemd[1]: Starting systemd-sysext.service... Oct 31 05:42:26.612636 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1103 (bootctl) Oct 31 05:42:26.614650 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 31 05:42:26.669035 systemd[1]: Unmounting usr-share-oem.mount... Oct 31 05:42:26.674729 systemd[1]: usr-share-oem.mount: Deactivated successfully. Oct 31 05:42:26.675159 systemd[1]: Unmounted usr-share-oem.mount. 
Oct 31 05:42:26.700189 kernel: loop0: detected capacity change from 0 to 224512
Oct 31 05:42:26.752883 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Oct 31 05:42:26.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:26.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:26.780548 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 31 05:42:26.781493 systemd[1]: Finished systemd-machine-id-commit.service.
Oct 31 05:42:26.793157 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 31 05:42:26.821207 kernel: loop1: detected capacity change from 0 to 224512
Oct 31 05:42:26.854485 (sd-sysext)[1120]: Using extensions 'kubernetes'.
Oct 31 05:42:26.855295 (sd-sysext)[1120]: Merged extensions into '/usr'.
Oct 31 05:42:26.891917 systemd-fsck[1116]: fsck.fat 4.2 (2021-01-31)
Oct 31 05:42:26.891917 systemd-fsck[1116]: /dev/vda1: 790 files, 120772/258078 clusters
Oct 31 05:42:26.894723 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 05:42:26.898761 systemd[1]: Mounting usr-share-oem.mount...
Oct 31 05:42:26.899785 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Oct 31 05:42:26.901647 systemd[1]: Starting modprobe@dm_mod.service...
Oct 31 05:42:26.909252 systemd[1]: Starting modprobe@efi_pstore.service...
Oct 31 05:42:26.914426 systemd[1]: Starting modprobe@loop.service...
Oct 31 05:42:26.916689 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Oct 31 05:42:26.917328 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 31 05:42:26.917989 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 05:42:26.929806 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Oct 31 05:42:26.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:26.931039 systemd[1]: Mounted usr-share-oem.mount.
Oct 31 05:42:26.932559 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 05:42:26.932797 systemd[1]: Finished modprobe@dm_mod.service.
Oct 31 05:42:26.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:26.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:26.937312 systemd[1]: Finished systemd-sysext.service.
Oct 31 05:42:26.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:26.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:26.938902 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 05:42:26.939146 systemd[1]: Finished modprobe@efi_pstore.service.
Oct 31 05:42:26.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:26.941909 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 05:42:26.944413 systemd[1]: Finished modprobe@loop.service.
Oct 31 05:42:26.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:26.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:26.958756 systemd[1]: Mounting boot.mount...
Oct 31 05:42:26.962006 systemd[1]: Starting ensure-sysext.service...
Oct 31 05:42:26.962798 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 31 05:42:26.962960 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Oct 31 05:42:26.965278 systemd[1]: Starting systemd-tmpfiles-setup.service...
Oct 31 05:42:26.979026 systemd[1]: Reloading.
Oct 31 05:42:26.987961 systemd-tmpfiles[1138]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Oct 31 05:42:26.991806 systemd-tmpfiles[1138]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 31 05:42:26.997395 systemd-tmpfiles[1138]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 31 05:42:27.106723 /usr/lib/systemd/system-generators/torcx-generator[1159]: time="2025-10-31T05:42:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Oct 31 05:42:27.106772 /usr/lib/systemd/system-generators/torcx-generator[1159]: time="2025-10-31T05:42:27Z" level=info msg="torcx already run"
Oct 31 05:42:27.262150 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Oct 31 05:42:27.262194 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 31 05:42:27.297438 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 31 05:42:27.350503 ldconfig[1102]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 31 05:42:27.413759 systemd[1]: Finished ldconfig.service.
Oct 31 05:42:27.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:27.427395 systemd[1]: Mounted boot.mount.
Oct 31 05:42:27.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:27.449720 systemd[1]: Finished systemd-boot-update.service.
Oct 31 05:42:27.460417 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Oct 31 05:42:27.463220 systemd[1]: Starting modprobe@dm_mod.service...
Oct 31 05:42:27.466050 systemd[1]: Starting modprobe@efi_pstore.service...
Oct 31 05:42:27.469100 systemd[1]: Starting modprobe@loop.service...
Oct 31 05:42:27.493597 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Oct 31 05:42:27.493858 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 31 05:42:27.495822 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 05:42:27.496087 systemd[1]: Finished modprobe@dm_mod.service.
Oct 31 05:42:27.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:27.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:27.498035 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 05:42:27.498311 systemd[1]: Finished modprobe@efi_pstore.service.
Oct 31 05:42:27.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:27.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:27.500043 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 05:42:27.500493 systemd[1]: Finished modprobe@loop.service.
Oct 31 05:42:27.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:27.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:27.502301 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 31 05:42:27.502626 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Oct 31 05:42:27.505434 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Oct 31 05:42:27.507795 systemd[1]: Starting modprobe@dm_mod.service...
Oct 31 05:42:27.510578 systemd[1]: Starting modprobe@efi_pstore.service...
Oct 31 05:42:27.513341 systemd[1]: Starting modprobe@loop.service...
Oct 31 05:42:27.514551 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Oct 31 05:42:27.515926 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 31 05:42:27.520695 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 05:42:27.520957 systemd[1]: Finished modprobe@dm_mod.service.
Oct 31 05:42:27.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:27.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:27.522829 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 05:42:27.523061 systemd[1]: Finished modprobe@efi_pstore.service.
Oct 31 05:42:27.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:27.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:27.525664 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 05:42:27.526065 systemd[1]: Finished modprobe@loop.service.
Oct 31 05:42:27.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:27.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:27.529760 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 31 05:42:27.529919 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Oct 31 05:42:27.541800 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Oct 31 05:42:27.545269 systemd[1]: Starting modprobe@dm_mod.service...
Oct 31 05:42:27.548010 systemd[1]: Starting modprobe@drm.service...
Oct 31 05:42:27.550739 systemd[1]: Starting modprobe@efi_pstore.service...
Oct 31 05:42:27.553718 systemd[1]: Starting modprobe@loop.service...
Oct 31 05:42:27.555023 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Oct 31 05:42:27.558096 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 31 05:42:27.563193 systemd[1]: Starting systemd-networkd-wait-online.service...
Oct 31 05:42:27.569746 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 05:42:27.570036 systemd[1]: Finished modprobe@dm_mod.service.
Oct 31 05:42:27.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:27.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:27.572781 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 31 05:42:27.573018 systemd[1]: Finished modprobe@drm.service.
Oct 31 05:42:27.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:27.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:27.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:27.576836 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 05:42:27.577092 systemd[1]: Finished modprobe@efi_pstore.service.
Oct 31 05:42:27.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:27.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:27.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:27.579390 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 05:42:27.579690 systemd[1]: Finished modprobe@loop.service.
Oct 31 05:42:27.581366 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 31 05:42:27.581527 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Oct 31 05:42:27.586748 systemd[1]: Finished ensure-sysext.service.
Oct 31 05:42:27.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:27.664372 systemd[1]: Finished systemd-tmpfiles-setup.service.
Oct 31 05:42:27.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:27.667761 systemd[1]: Starting audit-rules.service...
Oct 31 05:42:27.670444 systemd[1]: Starting clean-ca-certificates.service...
Oct 31 05:42:27.673231 systemd[1]: Starting systemd-journal-catalog-update.service...
Oct 31 05:42:27.683512 systemd[1]: Starting systemd-resolved.service...
Oct 31 05:42:27.707607 systemd[1]: Starting systemd-timesyncd.service...
Oct 31 05:42:27.712803 systemd[1]: Starting systemd-update-utmp.service...
Oct 31 05:42:27.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:27.717381 systemd[1]: Finished clean-ca-certificates.service.
Oct 31 05:42:27.719667 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 05:42:27.719713 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 31 05:42:27.728000 audit[1252]: SYSTEM_BOOT pid=1252 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:27.719745 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 05:42:27.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:27.733962 systemd[1]: Finished systemd-update-utmp.service.
Oct 31 05:42:27.747677 systemd-networkd[1069]: eth0: Gained IPv6LL
Oct 31 05:42:27.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:27.754659 systemd[1]: Finished systemd-networkd-wait-online.service.
Oct 31 05:42:27.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:27.757622 systemd[1]: Finished systemd-journal-catalog-update.service.
Oct 31 05:42:27.760916 systemd[1]: Starting systemd-update-done.service...
Oct 31 05:42:27.782317 systemd[1]: Finished systemd-update-done.service.
Oct 31 05:42:27.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:27.816000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Oct 31 05:42:27.816000 audit[1266]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcb56e65c0 a2=420 a3=0 items=0 ppid=1240 pid=1266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:42:27.816000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Oct 31 05:42:27.817058 augenrules[1266]: No rules
Oct 31 05:42:27.818352 systemd[1]: Finished audit-rules.service.
Oct 31 05:42:27.850600 systemd-resolved[1244]: Positive Trust Anchors:
Oct 31 05:42:27.850622 systemd-resolved[1244]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 31 05:42:27.850664 systemd-resolved[1244]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Oct 31 05:42:27.859765 systemd-resolved[1244]: Using system hostname 'srv-f2mor.gb1.brightbox.com'.
Oct 31 05:42:27.862822 systemd[1]: Started systemd-resolved.service.
Oct 31 05:42:27.863778 systemd[1]: Started systemd-timesyncd.service.
Oct 31 05:42:27.864541 systemd[1]: Reached target network.target.
Oct 31 05:42:27.865191 systemd[1]: Reached target network-online.target.
Oct 31 05:42:27.865838 systemd[1]: Reached target nss-lookup.target.
Oct 31 05:42:27.866506 systemd[1]: Reached target sysinit.target.
Oct 31 05:42:27.867251 systemd[1]: Started motdgen.path.
Oct 31 05:42:27.867880 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Oct 31 05:42:27.869235 systemd[1]: Started systemd-tmpfiles-clean.timer.
Oct 31 05:42:27.870281 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 31 05:42:27.870339 systemd[1]: Reached target paths.target.
Oct 31 05:42:27.870930 systemd[1]: Reached target time-set.target.
Oct 31 05:42:27.871817 systemd[1]: Started logrotate.timer.
Oct 31 05:42:27.872559 systemd[1]: Started mdadm.timer.
Oct 31 05:42:27.873265 systemd[1]: Reached target timers.target.
Oct 31 05:42:27.874454 systemd[1]: Listening on dbus.socket.
Oct 31 05:42:27.877581 systemd[1]: Starting docker.socket...
Oct 31 05:42:27.882253 systemd[1]: Listening on sshd.socket.
Oct 31 05:42:27.883049 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 31 05:42:27.883777 systemd[1]: Listening on docker.socket.
Oct 31 05:42:27.884571 systemd[1]: Reached target sockets.target.
Oct 31 05:42:27.885255 systemd[1]: Reached target basic.target.
Oct 31 05:42:27.886163 systemd[1]: System is tainted: cgroupsv1
Oct 31 05:42:27.886265 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Oct 31 05:42:27.886307 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Oct 31 05:42:27.888631 systemd[1]: Starting containerd.service...
Oct 31 05:42:27.890928 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Oct 31 05:42:27.893631 systemd[1]: Starting dbus.service...
Oct 31 05:42:27.896629 systemd[1]: Starting enable-oem-cloudinit.service...
Oct 31 05:42:27.903017 systemd[1]: Starting extend-filesystems.service...
Oct 31 05:42:27.906324 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Oct 31 05:42:27.911765 systemd[1]: Starting kubelet.service...
Oct 31 05:42:27.915794 systemd[1]: Starting motdgen.service...
Oct 31 05:42:27.922483 jq[1279]: false
Oct 31 05:42:27.923348 systemd[1]: Starting prepare-helm.service...
Oct 31 05:42:27.926647 systemd[1]: Starting ssh-key-proc-cmdline.service...
Oct 31 05:42:27.936818 systemd[1]: Starting sshd-keygen.service...
Oct 31 05:42:27.954459 systemd[1]: Starting systemd-logind.service...
Oct 31 05:42:27.956345 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 31 05:42:27.956490 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 31 05:42:27.959077 systemd[1]: Starting update-engine.service...
Oct 31 05:42:27.970795 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Oct 31 05:42:27.978552 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 31 05:42:27.979008 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Oct 31 05:42:27.996370 dbus-daemon[1278]: [system] SELinux support is enabled
Oct 31 05:42:28.000073 systemd[1]: Started dbus.service.
Oct 31 05:42:28.031461 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 31 05:42:28.051442 jq[1298]: true
Oct 31 05:42:28.031546 systemd[1]: Reached target system-config.target.
Oct 31 05:42:28.032463 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 31 05:42:28.032504 systemd[1]: Reached target user-config.target.
Oct 31 05:42:28.040066 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 31 05:42:28.040634 systemd[1]: Finished ssh-key-proc-cmdline.service.
Oct 31 05:42:28.078070 tar[1305]: linux-amd64/LICENSE
Oct 31 05:42:28.078070 tar[1305]: linux-amd64/helm
Oct 31 05:42:28.091335 extend-filesystems[1280]: Found loop1
Oct 31 05:42:28.091335 extend-filesystems[1280]: Found vda
Oct 31 05:42:28.091335 extend-filesystems[1280]: Found vda1
Oct 31 05:42:28.091335 extend-filesystems[1280]: Found vda2
Oct 31 05:42:28.091335 extend-filesystems[1280]: Found vda3
Oct 31 05:42:28.091335 extend-filesystems[1280]: Found usr
Oct 31 05:42:28.091335 extend-filesystems[1280]: Found vda4
Oct 31 05:42:28.091335 extend-filesystems[1280]: Found vda6
Oct 31 05:42:28.091335 extend-filesystems[1280]: Found vda7
Oct 31 05:42:28.091335 extend-filesystems[1280]: Found vda9
Oct 31 05:42:28.091335 extend-filesystems[1280]: Checking size of /dev/vda9
Oct 31 05:42:28.185992 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Oct 31 05:42:28.046187 systemd[1]: Created slice system-sshd.slice.
Oct 31 05:42:28.186448 jq[1315]: true
Oct 31 05:42:28.186830 extend-filesystems[1280]: Resized partition /dev/vda9
Oct 31 05:42:28.096429 dbus-daemon[1278]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1069 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Oct 31 05:42:28.095300 systemd[1]: motdgen.service: Deactivated successfully.
Oct 31 05:42:28.188614 extend-filesystems[1332]: resize2fs 1.46.5 (30-Dec-2021)
Oct 31 05:42:28.110957 dbus-daemon[1278]: [system] Successfully activated service 'org.freedesktop.systemd1'
Oct 31 05:42:28.095698 systemd[1]: Finished motdgen.service.
Oct 31 05:42:28.116589 systemd[1]: Starting systemd-hostnamed.service...
Oct 31 05:42:28.264834 systemd-logind[1296]: Watching system buttons on /dev/input/event2 (Power Button)
Oct 31 05:42:28.264884 systemd-logind[1296]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 31 05:42:28.267665 systemd-logind[1296]: New seat seat0.
Oct 31 05:42:28.273853 systemd[1]: Started systemd-logind.service.
Oct 31 05:42:28.304167 env[1308]: time="2025-10-31T05:42:28.303994071Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Oct 31 05:42:28.312531 update_engine[1297]: I1031 05:42:28.311483 1297 main.cc:92] Flatcar Update Engine starting
Oct 31 05:42:28.391755 update_engine[1297]: I1031 05:42:28.317708 1297 update_check_scheduler.cc:74] Next update check in 6m39s
Oct 31 05:42:28.373666 dbus-daemon[1278]: [system] Successfully activated service 'org.freedesktop.hostname1'
Oct 31 05:42:28.318843 systemd[1]: Started update-engine.service.
Oct 31 05:42:28.374544 dbus-daemon[1278]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1329 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Oct 31 05:42:28.326379 systemd[1]: Started locksmithd.service.
Oct 31 05:42:28.373941 systemd[1]: Started systemd-hostnamed.service.
Oct 31 05:42:28.388744 systemd[1]: Starting polkit.service...
Oct 31 05:42:28.417815 env[1308]: time="2025-10-31T05:42:28.417719480Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 31 05:42:28.426513 env[1308]: time="2025-10-31T05:42:28.426425257Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 31 05:42:28.429466 env[1308]: time="2025-10-31T05:42:28.429342544Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 31 05:42:28.429466 env[1308]: time="2025-10-31T05:42:28.429390289Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 31 05:42:28.451398 env[1308]: time="2025-10-31T05:42:28.449628087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 31 05:42:28.451398 env[1308]: time="2025-10-31T05:42:28.449677162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 31 05:42:28.451398 env[1308]: time="2025-10-31T05:42:28.449706649Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Oct 31 05:42:28.451398 env[1308]: time="2025-10-31T05:42:28.449726310Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 31 05:42:28.447533 polkitd[1348]: Started polkitd version 121
Oct 31 05:42:28.453636 bash[1343]: Updated "/home/core/.ssh/authorized_keys"
Oct 31 05:42:28.452807 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Oct 31 05:42:28.460193 env[1308]: time="2025-10-31T05:42:28.449893888Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 31 05:42:28.460193 env[1308]: time="2025-10-31T05:42:28.458524853Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 31 05:42:28.460193 env[1308]: time="2025-10-31T05:42:28.458840058Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 31 05:42:28.460193 env[1308]: time="2025-10-31T05:42:28.458871831Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 31 05:42:28.460193 env[1308]: time="2025-10-31T05:42:28.458973735Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Oct 31 05:42:28.460193 env[1308]: time="2025-10-31T05:42:28.459003614Z" level=info msg="metadata content store policy set" policy=shared
Oct 31 05:42:28.486347 polkitd[1348]: Loading rules from directory /etc/polkit-1/rules.d
Oct 31 05:42:28.499806 polkitd[1348]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 31 05:42:28.505512 polkitd[1348]: Finished loading, compiling and executing 2 rules
Oct 31 05:42:28.507270 env[1308]: time="2025-10-31T05:42:28.505851972Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 31 05:42:28.507270 env[1308]: time="2025-10-31T05:42:28.506008259Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 31 05:42:28.507270 env[1308]: time="2025-10-31T05:42:28.506066207Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 31 05:42:28.507270 env[1308]: time="2025-10-31T05:42:28.506204064Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 31 05:42:28.507270 env[1308]: time="2025-10-31T05:42:28.506265729Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 31 05:42:28.507270 env[1308]: time="2025-10-31T05:42:28.506300964Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 31 05:42:28.507270 env[1308]: time="2025-10-31T05:42:28.506350948Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 31 05:42:28.507270 env[1308]: time="2025-10-31T05:42:28.506382249Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 31 05:42:28.507270 env[1308]: time="2025-10-31T05:42:28.506434546Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Oct 31 05:42:28.507270 env[1308]: time="2025-10-31T05:42:28.506465256Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 31 05:42:28.507270 env[1308]: time="2025-10-31T05:42:28.506513447Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 31 05:42:28.507270 env[1308]: time="2025-10-31T05:42:28.506544691Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 31 05:42:28.507270 env[1308]: time="2025-10-31T05:42:28.506836706Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 31 05:42:28.507270 env[1308]: time="2025-10-31T05:42:28.507066623Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 31 05:42:28.508406 dbus-daemon[1278]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Oct 31 05:42:28.508639 systemd[1]: Started polkit.service.
Oct 31 05:42:28.510803 polkitd[1348]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Oct 31 05:42:28.514952 env[1308]: time="2025-10-31T05:42:28.514643679Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 31 05:42:28.514952 env[1308]: time="2025-10-31T05:42:28.514725461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 31 05:42:28.514952 env[1308]: time="2025-10-31T05:42:28.514757905Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 31 05:42:28.514952 env[1308]: time="2025-10-31T05:42:28.514866466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 31 05:42:28.514952 env[1308]: time="2025-10-31T05:42:28.514897202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 31 05:42:28.514952 env[1308]: time="2025-10-31T05:42:28.514929550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 31 05:42:28.515393 env[1308]: time="2025-10-31T05:42:28.514956367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 31 05:42:28.515393 env[1308]: time="2025-10-31T05:42:28.514978550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 31 05:42:28.515393 env[1308]: time="2025-10-31T05:42:28.515003172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 31 05:42:28.515393 env[1308]: time="2025-10-31T05:42:28.515023518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 31 05:42:28.515393 env[1308]: time="2025-10-31T05:42:28.515042341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 31 05:42:28.515393 env[1308]: time="2025-10-31T05:42:28.515065280Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 31 05:42:28.515393 env[1308]: time="2025-10-31T05:42:28.515348729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 31 05:42:28.515393 env[1308]: time="2025-10-31T05:42:28.515376842Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 31 05:42:28.515794 env[1308]: time="2025-10-31T05:42:28.515398962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 31 05:42:28.515794 env[1308]: time="2025-10-31T05:42:28.515418673Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 31 05:42:28.515794 env[1308]: time="2025-10-31T05:42:28.515442503Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Oct 31 05:42:28.515794 env[1308]: time="2025-10-31T05:42:28.515465420Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 31 05:42:28.515794 env[1308]: time="2025-10-31T05:42:28.515519703Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Oct 31 05:42:28.515794 env[1308]: time="2025-10-31T05:42:28.515597670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..."
type=io.containerd.grpc.v1 Oct 31 05:42:28.516086 env[1308]: time="2025-10-31T05:42:28.515926003Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 31 05:42:28.516086 env[1308]: time="2025-10-31T05:42:28.516073428Z" level=info msg="Connect containerd service" Oct 31 05:42:28.518828 env[1308]: time="2025-10-31T05:42:28.516199183Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 31 05:42:28.518828 env[1308]: time="2025-10-31T05:42:28.517089400Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 31 05:42:28.518828 env[1308]: time="2025-10-31T05:42:28.518030243Z" level=info msg="Start subscribing containerd event" Oct 31 05:42:28.518828 env[1308]: time="2025-10-31T05:42:28.518103408Z" level=info msg="Start recovering state" Oct 31 05:42:28.518828 env[1308]: time="2025-10-31T05:42:28.518273533Z" level=info msg="Start event monitor" Oct 31 05:42:28.518828 env[1308]: time="2025-10-31T05:42:28.518313158Z" level=info msg="Start snapshots syncer" Oct 31 05:42:28.518828 env[1308]: time="2025-10-31T05:42:28.518338013Z" level=info msg="Start cni network conf syncer for default" Oct 31 05:42:28.518828 env[1308]: time="2025-10-31T05:42:28.518353596Z" level=info msg="Start streaming server" Oct 31 05:42:28.518828 env[1308]: time="2025-10-31T05:42:28.518616949Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 31 05:42:28.518828 env[1308]: time="2025-10-31T05:42:28.518695669Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 31 05:42:28.518937 systemd[1]: Started containerd.service. 
Oct 31 05:42:28.540868 systemd-hostnamed[1329]: Hostname set to (static) Oct 31 05:42:28.546053 env[1308]: time="2025-10-31T05:42:28.545965664Z" level=info msg="containerd successfully booted in 0.248193s" Oct 31 05:42:28.704238 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Oct 31 05:42:28.742430 extend-filesystems[1332]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 31 05:42:28.742430 extend-filesystems[1332]: old_desc_blocks = 1, new_desc_blocks = 8 Oct 31 05:42:28.742430 extend-filesystems[1332]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Oct 31 05:42:28.749774 extend-filesystems[1280]: Resized filesystem in /dev/vda9 Oct 31 05:42:28.743937 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 31 05:42:28.744439 systemd[1]: Finished extend-filesystems.service. Oct 31 05:42:29.628921 systemd-resolved[1244]: Clock change detected. Flushing caches. Oct 31 05:42:29.630190 systemd-timesyncd[1251]: Contacted time server 185.177.149.33:123 (0.flatcar.pool.ntp.org). Oct 31 05:42:29.630299 systemd-timesyncd[1251]: Initial clock synchronization to Fri 2025-10-31 05:42:29.628829 UTC. Oct 31 05:42:29.756298 tar[1305]: linux-amd64/README.md Oct 31 05:42:29.766445 systemd[1]: Finished prepare-helm.service. Oct 31 05:42:29.786828 locksmithd[1347]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 31 05:42:30.145074 systemd-networkd[1069]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:552:24:19ff:fef4:154a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:552:24:19ff:fef4:154a/64 assigned by NDisc. Oct 31 05:42:30.145090 systemd-networkd[1069]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Oct 31 05:42:30.433438 systemd[1]: Started kubelet.service. 
Oct 31 05:42:30.734463 sshd_keygen[1311]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 31 05:42:30.769207 systemd[1]: Finished sshd-keygen.service. Oct 31 05:42:30.779428 systemd[1]: Starting issuegen.service... Oct 31 05:42:30.784267 systemd[1]: Started sshd@0-10.244.21.74:22-139.178.68.195:51898.service. Oct 31 05:42:30.792927 systemd[1]: issuegen.service: Deactivated successfully. Oct 31 05:42:30.793563 systemd[1]: Finished issuegen.service. Oct 31 05:42:30.800694 systemd[1]: Starting systemd-user-sessions.service... Oct 31 05:42:30.813123 systemd[1]: Finished systemd-user-sessions.service. Oct 31 05:42:30.818576 systemd[1]: Started getty@tty1.service. Oct 31 05:42:30.824852 systemd[1]: Started serial-getty@ttyS0.service. Oct 31 05:42:30.826372 systemd[1]: Reached target getty.target. Oct 31 05:42:31.177455 kubelet[1375]: E1031 05:42:31.177263 1375 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 05:42:31.182911 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 05:42:31.183237 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 05:42:31.727397 sshd[1391]: Accepted publickey for core from 139.178.68.195 port 51898 ssh2: RSA SHA256:SNDspdr08ljC7u6YFsSEbFAM11P2/Di3eXjgL9Yd2IU Oct 31 05:42:31.730664 sshd[1391]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 05:42:31.750779 systemd[1]: Created slice user-500.slice. Oct 31 05:42:31.754740 systemd[1]: Starting user-runtime-dir@500.service... Oct 31 05:42:31.758794 systemd-logind[1296]: New session 1 of user core. Oct 31 05:42:31.783219 systemd[1]: Finished user-runtime-dir@500.service. 
Oct 31 05:42:31.787069 systemd[1]: Starting user@500.service... Oct 31 05:42:31.795257 (systemd)[1404]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 31 05:42:31.904271 systemd[1404]: Queued start job for default target default.target. Oct 31 05:42:31.905526 systemd[1404]: Reached target paths.target. Oct 31 05:42:31.905591 systemd[1404]: Reached target sockets.target. Oct 31 05:42:31.905614 systemd[1404]: Reached target timers.target. Oct 31 05:42:31.905635 systemd[1404]: Reached target basic.target. Oct 31 05:42:31.905823 systemd[1]: Started user@500.service. Oct 31 05:42:31.906620 systemd[1404]: Reached target default.target. Oct 31 05:42:31.906691 systemd[1404]: Startup finished in 101ms. Oct 31 05:42:31.908243 systemd[1]: Started session-1.scope. Oct 31 05:42:32.541897 systemd[1]: Started sshd@1-10.244.21.74:22-139.178.68.195:51906.service. Oct 31 05:42:33.441302 sshd[1414]: Accepted publickey for core from 139.178.68.195 port 51906 ssh2: RSA SHA256:SNDspdr08ljC7u6YFsSEbFAM11P2/Di3eXjgL9Yd2IU Oct 31 05:42:33.444056 sshd[1414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 05:42:33.451444 systemd-logind[1296]: New session 2 of user core. Oct 31 05:42:33.451845 systemd[1]: Started session-2.scope. Oct 31 05:42:34.069387 sshd[1414]: pam_unix(sshd:session): session closed for user core Oct 31 05:42:34.073798 systemd[1]: sshd@1-10.244.21.74:22-139.178.68.195:51906.service: Deactivated successfully. Oct 31 05:42:34.075120 systemd-logind[1296]: Session 2 logged out. Waiting for processes to exit. Oct 31 05:42:34.075216 systemd[1]: session-2.scope: Deactivated successfully. Oct 31 05:42:34.077390 systemd-logind[1296]: Removed session 2. Oct 31 05:42:34.215558 systemd[1]: Started sshd@2-10.244.21.74:22-139.178.68.195:58376.service. 
Oct 31 05:42:35.111226 sshd[1421]: Accepted publickey for core from 139.178.68.195 port 58376 ssh2: RSA SHA256:SNDspdr08ljC7u6YFsSEbFAM11P2/Di3eXjgL9Yd2IU Oct 31 05:42:35.113288 sshd[1421]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 05:42:35.120843 systemd-logind[1296]: New session 3 of user core. Oct 31 05:42:35.121704 systemd[1]: Started session-3.scope. Oct 31 05:42:35.737892 sshd[1421]: pam_unix(sshd:session): session closed for user core Oct 31 05:42:35.741727 systemd-logind[1296]: Session 3 logged out. Waiting for processes to exit. Oct 31 05:42:35.742193 systemd[1]: sshd@2-10.244.21.74:22-139.178.68.195:58376.service: Deactivated successfully. Oct 31 05:42:35.743298 systemd[1]: session-3.scope: Deactivated successfully. Oct 31 05:42:35.744011 systemd-logind[1296]: Removed session 3. Oct 31 05:42:35.978096 coreos-metadata[1276]: Oct 31 05:42:35.977 WARN failed to locate config-drive, using the metadata service API instead Oct 31 05:42:36.035750 coreos-metadata[1276]: Oct 31 05:42:36.035 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Oct 31 05:42:36.060750 coreos-metadata[1276]: Oct 31 05:42:36.060 INFO Fetch successful Oct 31 05:42:36.061104 coreos-metadata[1276]: Oct 31 05:42:36.060 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Oct 31 05:42:36.093168 coreos-metadata[1276]: Oct 31 05:42:36.092 INFO Fetch successful Oct 31 05:42:36.095107 unknown[1276]: wrote ssh authorized keys file for user: core Oct 31 05:42:36.110905 update-ssh-keys[1431]: Updated "/home/core/.ssh/authorized_keys" Oct 31 05:42:36.111709 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Oct 31 05:42:36.112274 systemd[1]: Reached target multi-user.target. Oct 31 05:42:36.114988 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 31 05:42:36.128307 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. 
Oct 31 05:42:36.128726 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 31 05:42:36.128960 systemd[1]: Startup finished in 7.589s (kernel) + 14.488s (userspace) = 22.077s. Oct 31 05:42:41.435071 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 31 05:42:41.435416 systemd[1]: Stopped kubelet.service. Oct 31 05:42:41.438026 systemd[1]: Starting kubelet.service... Oct 31 05:42:41.615436 systemd[1]: Started kubelet.service. Oct 31 05:42:41.717876 kubelet[1444]: E1031 05:42:41.717709 1444 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 05:42:41.721949 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 05:42:41.722248 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 05:42:45.889255 systemd[1]: Started sshd@3-10.244.21.74:22-139.178.68.195:40754.service. Oct 31 05:42:46.794429 sshd[1452]: Accepted publickey for core from 139.178.68.195 port 40754 ssh2: RSA SHA256:SNDspdr08ljC7u6YFsSEbFAM11P2/Di3eXjgL9Yd2IU Oct 31 05:42:46.797767 sshd[1452]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 05:42:46.805992 systemd[1]: Started session-4.scope. Oct 31 05:42:46.807601 systemd-logind[1296]: New session 4 of user core. Oct 31 05:42:47.427896 sshd[1452]: pam_unix(sshd:session): session closed for user core Oct 31 05:42:47.433175 systemd-logind[1296]: Session 4 logged out. Waiting for processes to exit. Oct 31 05:42:47.434289 systemd[1]: sshd@3-10.244.21.74:22-139.178.68.195:40754.service: Deactivated successfully. Oct 31 05:42:47.435628 systemd[1]: session-4.scope: Deactivated successfully. Oct 31 05:42:47.436375 systemd-logind[1296]: Removed session 4. 
Oct 31 05:42:47.575152 systemd[1]: Started sshd@4-10.244.21.74:22-139.178.68.195:40768.service. Oct 31 05:42:48.474713 sshd[1459]: Accepted publickey for core from 139.178.68.195 port 40768 ssh2: RSA SHA256:SNDspdr08ljC7u6YFsSEbFAM11P2/Di3eXjgL9Yd2IU Oct 31 05:42:48.477306 sshd[1459]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 05:42:48.484439 systemd-logind[1296]: New session 5 of user core. Oct 31 05:42:48.484512 systemd[1]: Started session-5.scope. Oct 31 05:42:49.099280 sshd[1459]: pam_unix(sshd:session): session closed for user core Oct 31 05:42:49.103019 systemd-logind[1296]: Session 5 logged out. Waiting for processes to exit. Oct 31 05:42:49.103391 systemd[1]: sshd@4-10.244.21.74:22-139.178.68.195:40768.service: Deactivated successfully. Oct 31 05:42:49.104489 systemd[1]: session-5.scope: Deactivated successfully. Oct 31 05:42:49.105157 systemd-logind[1296]: Removed session 5. Oct 31 05:42:49.246041 systemd[1]: Started sshd@5-10.244.21.74:22-139.178.68.195:40782.service. Oct 31 05:42:50.141669 sshd[1466]: Accepted publickey for core from 139.178.68.195 port 40782 ssh2: RSA SHA256:SNDspdr08ljC7u6YFsSEbFAM11P2/Di3eXjgL9Yd2IU Oct 31 05:42:50.143289 sshd[1466]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 05:42:50.150775 systemd[1]: Started session-6.scope. Oct 31 05:42:50.151769 systemd-logind[1296]: New session 6 of user core. Oct 31 05:42:50.766446 sshd[1466]: pam_unix(sshd:session): session closed for user core Oct 31 05:42:50.770430 systemd-logind[1296]: Session 6 logged out. Waiting for processes to exit. Oct 31 05:42:50.771111 systemd[1]: sshd@5-10.244.21.74:22-139.178.68.195:40782.service: Deactivated successfully. Oct 31 05:42:50.772217 systemd[1]: session-6.scope: Deactivated successfully. Oct 31 05:42:50.772950 systemd-logind[1296]: Removed session 6. Oct 31 05:42:50.914832 systemd[1]: Started sshd@6-10.244.21.74:22-139.178.68.195:40794.service. 
Oct 31 05:42:51.819217 sshd[1473]: Accepted publickey for core from 139.178.68.195 port 40794 ssh2: RSA SHA256:SNDspdr08ljC7u6YFsSEbFAM11P2/Di3eXjgL9Yd2IU Oct 31 05:42:51.822082 sshd[1473]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 05:42:51.823442 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 31 05:42:51.823850 systemd[1]: Stopped kubelet.service. Oct 31 05:42:51.826301 systemd[1]: Starting kubelet.service... Oct 31 05:42:51.833429 systemd[1]: Started session-7.scope. Oct 31 05:42:51.840639 systemd-logind[1296]: New session 7 of user core. Oct 31 05:42:51.975377 systemd[1]: Started kubelet.service. Oct 31 05:42:52.040607 kubelet[1485]: E1031 05:42:52.039900 1485 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 05:42:52.043291 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 05:42:52.043617 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 05:42:52.320894 sudo[1492]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 31 05:42:52.321312 sudo[1492]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 31 05:42:52.331833 dbus-daemon[1278]: Ѝp\x98TV: received setenforce notice (enforcing=-1536177872) Oct 31 05:42:52.334311 sudo[1492]: pam_unix(sudo:session): session closed for user root Oct 31 05:42:52.484078 sshd[1473]: pam_unix(sshd:session): session closed for user core Oct 31 05:42:52.488561 systemd-logind[1296]: Session 7 logged out. Waiting for processes to exit. Oct 31 05:42:52.489143 systemd[1]: sshd@6-10.244.21.74:22-139.178.68.195:40794.service: Deactivated successfully. 
Oct 31 05:42:52.490510 systemd[1]: session-7.scope: Deactivated successfully. Oct 31 05:42:52.491275 systemd-logind[1296]: Removed session 7. Oct 31 05:42:52.630291 systemd[1]: Started sshd@7-10.244.21.74:22-139.178.68.195:40806.service. Oct 31 05:42:53.529880 sshd[1496]: Accepted publickey for core from 139.178.68.195 port 40806 ssh2: RSA SHA256:SNDspdr08ljC7u6YFsSEbFAM11P2/Di3eXjgL9Yd2IU Oct 31 05:42:53.532565 sshd[1496]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 05:42:53.540015 systemd[1]: Started session-8.scope. Oct 31 05:42:53.540735 systemd-logind[1296]: New session 8 of user core. Oct 31 05:42:54.016988 sudo[1501]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 31 05:42:54.017403 sudo[1501]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 31 05:42:54.022639 sudo[1501]: pam_unix(sudo:session): session closed for user root Oct 31 05:42:54.030080 sudo[1500]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 31 05:42:54.030947 sudo[1500]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 31 05:42:54.045611 systemd[1]: Stopping audit-rules.service... 
Oct 31 05:42:54.046000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 31 05:42:54.051370 kernel: kauditd_printk_skb: 154 callbacks suppressed Oct 31 05:42:54.051491 kernel: audit: type=1305 audit(1761889374.046:166): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 31 05:42:54.051550 auditctl[1504]: No rules Oct 31 05:42:54.046000 audit[1504]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc8aa74590 a2=420 a3=0 items=0 ppid=1 pid=1504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:42:54.052555 systemd[1]: audit-rules.service: Deactivated successfully. Oct 31 05:42:54.052943 systemd[1]: Stopped audit-rules.service. Oct 31 05:42:54.056955 systemd[1]: Starting audit-rules.service... Oct 31 05:42:54.062264 kernel: audit: type=1300 audit(1761889374.046:166): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc8aa74590 a2=420 a3=0 items=0 ppid=1 pid=1504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:42:54.046000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 31 05:42:54.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 05:42:54.071014 kernel: audit: type=1327 audit(1761889374.046:166): proctitle=2F7362696E2F617564697463746C002D44 Oct 31 05:42:54.071120 kernel: audit: type=1131 audit(1761889374.051:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:54.090818 augenrules[1522]: No rules Oct 31 05:42:54.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:54.093957 sudo[1500]: pam_unix(sudo:session): session closed for user root Oct 31 05:42:54.092242 systemd[1]: Finished audit-rules.service. Oct 31 05:42:54.098580 kernel: audit: type=1130 audit(1761889374.091:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:54.091000 audit[1500]: USER_END pid=1500 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 31 05:42:54.091000 audit[1500]: CRED_DISP pid=1500 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 31 05:42:54.110859 kernel: audit: type=1106 audit(1761889374.091:169): pid=1500 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 31 05:42:54.110956 kernel: audit: type=1104 audit(1761889374.091:170): pid=1500 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 31 05:42:54.243944 sshd[1496]: pam_unix(sshd:session): session closed for user core Oct 31 05:42:54.244000 audit[1496]: USER_END pid=1496 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:42:54.249202 systemd[1]: sshd@7-10.244.21.74:22-139.178.68.195:40806.service: Deactivated successfully. Oct 31 05:42:54.250274 systemd[1]: session-8.scope: Deactivated successfully. Oct 31 05:42:54.251306 systemd-logind[1296]: Session 8 logged out. Waiting for processes to exit. Oct 31 05:42:54.253030 systemd-logind[1296]: Removed session 8. 
Oct 31 05:42:54.244000 audit[1496]: CRED_DISP pid=1496 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:42:54.260040 kernel: audit: type=1106 audit(1761889374.244:171): pid=1496 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:42:54.260211 kernel: audit: type=1104 audit(1761889374.244:172): pid=1496 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:42:54.260281 kernel: audit: type=1131 audit(1761889374.244:173): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.244.21.74:22-139.178.68.195:40806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:54.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.244.21.74:22-139.178.68.195:40806 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:42:54.390262 systemd[1]: Started sshd@8-10.244.21.74:22-139.178.68.195:33036.service. Oct 31 05:42:54.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.244.21.74:22-139.178.68.195:33036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 05:42:55.283000 audit[1529]: USER_ACCT pid=1529 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:42:55.285752 sshd[1529]: Accepted publickey for core from 139.178.68.195 port 33036 ssh2: RSA SHA256:SNDspdr08ljC7u6YFsSEbFAM11P2/Di3eXjgL9Yd2IU Oct 31 05:42:55.286000 audit[1529]: CRED_ACQ pid=1529 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:42:55.286000 audit[1529]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff16568a10 a2=3 a3=0 items=0 ppid=1 pid=1529 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:42:55.286000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 05:42:55.288992 sshd[1529]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 05:42:55.298234 systemd[1]: Started session-9.scope. Oct 31 05:42:55.299353 systemd-logind[1296]: New session 9 of user core. 
Oct 31 05:42:55.306000 audit[1529]: USER_START pid=1529 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:42:55.309000 audit[1532]: CRED_ACQ pid=1532 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:42:55.766000 audit[1533]: USER_ACCT pid=1533 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 31 05:42:55.768439 sudo[1533]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 31 05:42:55.767000 audit[1533]: CRED_REFR pid=1533 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 31 05:42:55.769484 sudo[1533]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 31 05:42:55.771000 audit[1533]: USER_START pid=1533 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 31 05:42:55.816073 systemd[1]: Starting docker.service... 
Oct 31 05:42:55.896466 env[1543]: time="2025-10-31T05:42:55.896323921Z" level=info msg="Starting up" Oct 31 05:42:55.899043 env[1543]: time="2025-10-31T05:42:55.899009812Z" level=info msg="parsed scheme: \"unix\"" module=grpc Oct 31 05:42:55.899185 env[1543]: time="2025-10-31T05:42:55.899153601Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Oct 31 05:42:55.899343 env[1543]: time="2025-10-31T05:42:55.899305364Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Oct 31 05:42:55.899494 env[1543]: time="2025-10-31T05:42:55.899463938Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Oct 31 05:42:55.903684 env[1543]: time="2025-10-31T05:42:55.903650170Z" level=info msg="parsed scheme: \"unix\"" module=grpc Oct 31 05:42:55.903848 env[1543]: time="2025-10-31T05:42:55.903818653Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Oct 31 05:42:55.903979 env[1543]: time="2025-10-31T05:42:55.903939385Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Oct 31 05:42:55.904087 env[1543]: time="2025-10-31T05:42:55.904060219Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Oct 31 05:42:55.913440 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2750328236-merged.mount: Deactivated successfully. Oct 31 05:42:55.954774 env[1543]: time="2025-10-31T05:42:55.954723586Z" level=warning msg="Your kernel does not support cgroup blkio weight" Oct 31 05:42:55.955060 env[1543]: time="2025-10-31T05:42:55.955031149Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Oct 31 05:42:55.955563 env[1543]: time="2025-10-31T05:42:55.955519420Z" level=info msg="Loading containers: start." 
Oct 31 05:42:56.043000 audit[1575]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1575 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct 31 05:42:56.043000 audit[1575]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fffb90667e0 a2=0 a3=7fffb90667cc items=0 ppid=1543 pid=1575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:42:56.043000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Oct 31 05:42:56.047000 audit[1577]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1577 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct 31 05:42:56.047000 audit[1577]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc115d83b0 a2=0 a3=7ffc115d839c items=0 ppid=1543 pid=1577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:42:56.047000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Oct 31 05:42:56.051000 audit[1579]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1579 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct 31 05:42:56.051000 audit[1579]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc67a6de60 a2=0 a3=7ffc67a6de4c items=0 ppid=1543 pid=1579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:42:56.051000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Oct 31 05:42:56.054000 audit[1581]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1581 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct 31 05:42:56.054000 audit[1581]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdc48612d0 a2=0 a3=7ffdc48612bc items=0 ppid=1543 pid=1581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:42:56.054000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Oct 31 05:42:56.060000 audit[1583]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1583 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct 31 05:42:56.060000 audit[1583]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcf69012f0 a2=0 a3=7ffcf69012dc items=0 ppid=1543 pid=1583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:42:56.060000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E
Oct 31 05:42:56.083000 audit[1588]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1588 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct 31 05:42:56.083000 audit[1588]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd17858650 a2=0 a3=7ffd1785863c items=0 ppid=1543 pid=1588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:42:56.083000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E
Oct 31 05:42:56.104000 audit[1590]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1590 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct 31 05:42:56.104000 audit[1590]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd12d18bf0 a2=0 a3=7ffd12d18bdc items=0 ppid=1543 pid=1590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:42:56.104000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Oct 31 05:42:56.107000 audit[1592]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1592 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct 31 05:42:56.107000 audit[1592]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffcccdabfb0 a2=0 a3=7ffcccdabf9c items=0 ppid=1543 pid=1592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:42:56.107000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Oct 31 05:42:56.111000 audit[1594]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1594 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct 31 05:42:56.111000 audit[1594]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffc671468a0 a2=0 a3=7ffc6714688c items=0 ppid=1543 pid=1594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:42:56.111000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Oct 31 05:42:56.122000 audit[1598]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1598 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct 31 05:42:56.122000 audit[1598]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffce72be270 a2=0 a3=7ffce72be25c items=0 ppid=1543 pid=1598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:42:56.122000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Oct 31 05:42:56.127000 audit[1599]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1599 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct 31 05:42:56.127000 audit[1599]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe0b5703c0 a2=0 a3=7ffe0b5703ac items=0 ppid=1543 pid=1599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:42:56.127000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Oct 31 05:42:56.148090 kernel: Initializing XFRM netlink socket
Oct 31 05:42:56.209289 env[1543]: time="2025-10-31T05:42:56.209233386Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Oct 31 05:42:56.261000 audit[1607]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1607 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct 31 05:42:56.261000 audit[1607]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7fff1e006a50 a2=0 a3=7fff1e006a3c items=0 ppid=1543 pid=1607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:42:56.261000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
Oct 31 05:42:56.272000 audit[1610]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1610 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct 31 05:42:56.272000 audit[1610]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffce95c9030 a2=0 a3=7ffce95c901c items=0 ppid=1543 pid=1610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:42:56.272000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
Oct 31 05:42:56.277000 audit[1613]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1613 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct 31 05:42:56.277000 audit[1613]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd88db3ad0 a2=0 a3=7ffd88db3abc items=0 ppid=1543 pid=1613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:42:56.277000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054
Oct 31 05:42:56.281000 audit[1615]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1615 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct 31 05:42:56.281000 audit[1615]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe6a073ad0 a2=0 a3=7ffe6a073abc items=0 ppid=1543 pid=1615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:42:56.281000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054
Oct 31 05:42:56.285000 audit[1617]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1617 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct 31 05:42:56.285000 audit[1617]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffd59bdd9e0 a2=0 a3=7ffd59bdd9cc items=0 ppid=1543 pid=1617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:42:56.285000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Oct 31 05:42:56.289000 audit[1619]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1619 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct 31 05:42:56.289000 audit[1619]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7fffcb1a1200 a2=0 a3=7fffcb1a11ec items=0 ppid=1543 pid=1619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:42:56.289000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
Oct 31 05:42:56.292000 audit[1621]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1621 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct 31 05:42:56.292000 audit[1621]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffd39bdb570 a2=0 a3=7ffd39bdb55c items=0 ppid=1543 pid=1621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:42:56.292000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552
Oct 31 05:42:56.305000 audit[1624]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1624 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct 31 05:42:56.305000 audit[1624]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7fffa4b02420 a2=0 a3=7fffa4b0240c items=0 ppid=1543 pid=1624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:42:56.305000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054
Oct 31 05:42:56.310000 audit[1627]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1627 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct 31 05:42:56.310000 audit[1627]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffdd7bd4c10 a2=0 a3=7ffdd7bd4bfc items=0 ppid=1543 pid=1627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:42:56.310000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Oct 31 05:42:56.313000 audit[1629]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1629 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct 31 05:42:56.313000 audit[1629]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffe285bcf10 a2=0 a3=7ffe285bcefc items=0 ppid=1543 pid=1629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:42:56.313000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32
Oct 31 05:42:56.316000 audit[1631]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1631 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct 31 05:42:56.316000 audit[1631]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc3412ee90 a2=0 a3=7ffc3412ee7c items=0 ppid=1543 pid=1631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:42:56.316000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50
Oct 31 05:42:56.319021 systemd-networkd[1069]: docker0: Link UP
Oct 31 05:42:56.329000 audit[1635]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1635 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct 31 05:42:56.329000 audit[1635]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcf3c732b0 a2=0 a3=7ffcf3c7329c items=0 ppid=1543 pid=1635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:42:56.329000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Oct 31 05:42:56.335000 audit[1636]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1636 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Oct 31 05:42:56.335000 audit[1636]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd63cebb20 a2=0 a3=7ffd63cebb0c items=0 ppid=1543 pid=1636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:42:56.335000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Oct 31 05:42:56.337736 env[1543]: time="2025-10-31T05:42:56.337692911Z" level=info msg="Loading containers: done."
Oct 31 05:42:56.359398 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2303993237-merged.mount: Deactivated successfully.
Oct 31 05:42:56.370000 env[1543]: time="2025-10-31T05:42:56.369948291Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 31 05:42:56.370313 env[1543]: time="2025-10-31T05:42:56.370273691Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Oct 31 05:42:56.370521 env[1543]: time="2025-10-31T05:42:56.370493746Z" level=info msg="Daemon has completed initialization"
Oct 31 05:42:56.393848 systemd[1]: Started docker.service.
Oct 31 05:42:56.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:42:56.400620 env[1543]: time="2025-10-31T05:42:56.400554680Z" level=info msg="API listen on /run/docker.sock"
Oct 31 05:42:57.685972 env[1308]: time="2025-10-31T05:42:57.685758328Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Oct 31 05:42:58.591370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount253313957.mount: Deactivated successfully.
Oct 31 05:43:00.186014 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 31 05:43:00.195890 kernel: kauditd_printk_skb: 84 callbacks suppressed
Oct 31 05:43:00.196071 kernel: audit: type=1131 audit(1761889380.185:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:43:00.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:43:01.520486 env[1308]: time="2025-10-31T05:43:01.520394953Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 05:43:01.526286 env[1308]: time="2025-10-31T05:43:01.526239809Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 05:43:01.528686 env[1308]: time="2025-10-31T05:43:01.528651373Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 05:43:01.531215 env[1308]: time="2025-10-31T05:43:01.531162176Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 05:43:01.533784 env[1308]: time="2025-10-31T05:43:01.532578436Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Oct 31 05:43:01.534995 env[1308]: time="2025-10-31T05:43:01.534959667Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Oct 31 05:43:02.295102 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Oct 31 05:43:02.295451 systemd[1]: Stopped kubelet.service.
Oct 31 05:43:02.305745 kernel: audit: type=1130 audit(1761889382.293:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:43:02.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:43:02.314180 kernel: audit: type=1131 audit(1761889382.293:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:43:02.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:43:02.310852 systemd[1]: Starting kubelet.service...
Oct 31 05:43:02.554912 systemd[1]: Started kubelet.service.
Oct 31 05:43:02.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:43:02.564657 kernel: audit: type=1130 audit(1761889382.553:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:43:02.651252 kubelet[1680]: E1031 05:43:02.651073 1680 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 31 05:43:02.653951 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 31 05:43:02.654297 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 31 05:43:02.660991 kernel: audit: type=1131 audit(1761889382.653:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Oct 31 05:43:02.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Oct 31 05:43:04.619108 env[1308]: time="2025-10-31T05:43:04.618965643Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 05:43:04.621923 env[1308]: time="2025-10-31T05:43:04.621878939Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 05:43:04.624392 env[1308]: time="2025-10-31T05:43:04.624352177Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 05:43:04.626792 env[1308]: time="2025-10-31T05:43:04.626753003Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 05:43:04.628229 env[1308]: time="2025-10-31T05:43:04.628160938Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Oct 31 05:43:04.630964 env[1308]: time="2025-10-31T05:43:04.630926218Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Oct 31 05:43:08.204600 env[1308]: time="2025-10-31T05:43:08.204511986Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 05:43:08.209288 env[1308]: time="2025-10-31T05:43:08.208588306Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 05:43:08.210984 env[1308]: time="2025-10-31T05:43:08.210948514Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 05:43:08.214878 env[1308]: time="2025-10-31T05:43:08.214839401Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 05:43:08.215494 env[1308]: time="2025-10-31T05:43:08.215453834Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Oct 31 05:43:08.216423 env[1308]: time="2025-10-31T05:43:08.216375814Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Oct 31 05:43:10.159243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1725015473.mount: Deactivated successfully.
Oct 31 05:43:11.282748 env[1308]: time="2025-10-31T05:43:11.282593487Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 05:43:11.285498 env[1308]: time="2025-10-31T05:43:11.285451776Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 05:43:11.287603 env[1308]: time="2025-10-31T05:43:11.287526800Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 05:43:11.289302 env[1308]: time="2025-10-31T05:43:11.289245973Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 05:43:11.290163 env[1308]: time="2025-10-31T05:43:11.290067051Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\""
Oct 31 05:43:11.291118 env[1308]: time="2025-10-31T05:43:11.291055753Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Oct 31 05:43:12.038311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount828383833.mount: Deactivated successfully.
Oct 31 05:43:12.737032 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Oct 31 05:43:12.737394 systemd[1]: Stopped kubelet.service.
Oct 31 05:43:12.754192 kernel: audit: type=1130 audit(1761889392.736:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:43:12.754346 kernel: audit: type=1131 audit(1761889392.736:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:43:12.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:43:12.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:43:12.743700 systemd[1]: Starting kubelet.service...
Oct 31 05:43:13.168438 systemd[1]: Started kubelet.service.
Oct 31 05:43:13.176963 kernel: audit: type=1130 audit(1761889393.168:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:43:13.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:43:13.286922 kubelet[1696]: E1031 05:43:13.286814 1696 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 31 05:43:13.288750 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 31 05:43:13.289081 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 31 05:43:13.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Oct 31 05:43:13.295562 kernel: audit: type=1131 audit(1761889393.289:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Oct 31 05:43:13.983947 env[1308]: time="2025-10-31T05:43:13.983848475Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 05:43:13.986686 env[1308]: time="2025-10-31T05:43:13.986649870Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 05:43:13.990886 env[1308]: time="2025-10-31T05:43:13.990843817Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 05:43:13.993489 env[1308]: time="2025-10-31T05:43:13.993452612Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 05:43:13.994717 env[1308]: time="2025-10-31T05:43:13.994676522Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Oct 31 05:43:13.995726 env[1308]: time="2025-10-31T05:43:13.995691467Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Oct 31 05:43:14.066728 update_engine[1297]: I1031 05:43:14.065723 1297 update_attempter.cc:509] Updating boot flags...
Oct 31 05:43:14.730817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3598107825.mount: Deactivated successfully.
Oct 31 05:43:14.754391 env[1308]: time="2025-10-31T05:43:14.754299556Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 05:43:14.757723 env[1308]: time="2025-10-31T05:43:14.757687076Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 05:43:14.760731 env[1308]: time="2025-10-31T05:43:14.760697032Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 05:43:14.764058 env[1308]: time="2025-10-31T05:43:14.764021132Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 05:43:14.766132 env[1308]: time="2025-10-31T05:43:14.765224271Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Oct 31 05:43:14.766936 env[1308]: time="2025-10-31T05:43:14.766884855Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Oct 31 05:43:15.608641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1471560594.mount: Deactivated successfully.
Oct 31 05:43:21.340170 env[1308]: time="2025-10-31T05:43:21.340010101Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 05:43:21.342917 env[1308]: time="2025-10-31T05:43:21.342881136Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 05:43:21.345736 env[1308]: time="2025-10-31T05:43:21.345689921Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 05:43:21.348494 env[1308]: time="2025-10-31T05:43:21.348458648Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 05:43:21.349886 env[1308]: time="2025-10-31T05:43:21.349824493Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Oct 31 05:43:23.486847 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Oct 31 05:43:23.487185 systemd[1]: Stopped kubelet.service. Oct 31 05:43:23.495703 kernel: audit: type=1130 audit(1761889403.486:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:43:23.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:43:23.493568 systemd[1]: Starting kubelet.service... 
Oct 31 05:43:23.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:43:23.504696 kernel: audit: type=1131 audit(1761889403.486:218): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:43:24.088695 systemd[1]: Started kubelet.service. Oct 31 05:43:24.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:43:24.098042 kernel: audit: type=1130 audit(1761889404.088:219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:43:24.190349 kubelet[1743]: E1031 05:43:24.190272 1743 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 05:43:24.193197 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 05:43:24.193491 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 05:43:24.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Oct 31 05:43:24.201899 kernel: audit: type=1131 audit(1761889404.193:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 31 05:43:25.217305 systemd[1]: Stopped kubelet.service. Oct 31 05:43:25.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:43:25.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:43:25.225717 systemd[1]: Starting kubelet.service... Oct 31 05:43:25.230364 kernel: audit: type=1130 audit(1761889405.217:221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:43:25.230464 kernel: audit: type=1131 audit(1761889405.220:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:43:25.276549 systemd[1]: Reloading. 
Oct 31 05:43:25.407497 /usr/lib/systemd/system-generators/torcx-generator[1781]: time="2025-10-31T05:43:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Oct 31 05:43:25.407568 /usr/lib/systemd/system-generators/torcx-generator[1781]: time="2025-10-31T05:43:25Z" level=info msg="torcx already run" Oct 31 05:43:25.552730 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 31 05:43:25.552774 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 31 05:43:25.581832 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 31 05:43:25.737032 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 31 05:43:25.737469 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 31 05:43:25.738199 systemd[1]: Stopped kubelet.service. Oct 31 05:43:25.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 31 05:43:25.742946 systemd[1]: Starting kubelet.service... Oct 31 05:43:25.746574 kernel: audit: type=1130 audit(1761889405.737:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Oct 31 05:43:25.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:43:25.898254 systemd[1]: Started kubelet.service. Oct 31 05:43:25.905595 kernel: audit: type=1130 audit(1761889405.898:224): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:43:26.036084 kubelet[1844]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 05:43:26.036769 kubelet[1844]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 31 05:43:26.036898 kubelet[1844]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 31 05:43:26.037219 kubelet[1844]: I1031 05:43:26.037172 1844 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 05:43:26.410646 kubelet[1844]: I1031 05:43:26.410593 1844 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 31 05:43:26.410646 kubelet[1844]: I1031 05:43:26.410637 1844 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 05:43:26.411051 kubelet[1844]: I1031 05:43:26.411001 1844 server.go:954] "Client rotation is on, will bootstrap in background" Oct 31 05:43:26.454132 kubelet[1844]: E1031 05:43:26.454076 1844 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.244.21.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.21.74:6443: connect: connection refused" logger="UnhandledError" Oct 31 05:43:26.456127 kubelet[1844]: I1031 05:43:26.456086 1844 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 05:43:26.468646 kubelet[1844]: E1031 05:43:26.468583 1844 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 31 05:43:26.468646 kubelet[1844]: I1031 05:43:26.468643 1844 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 31 05:43:26.481208 kubelet[1844]: I1031 05:43:26.481171 1844 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 31 05:43:26.483136 kubelet[1844]: I1031 05:43:26.483093 1844 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 31 05:43:26.483592 kubelet[1844]: I1031 05:43:26.483255 1844 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-f2mor.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Oct 31 05:43:26.483976 kubelet[1844]: I1031 05:43:26.483950 1844 topology_manager.go:138] "Creating topology manager 
with none policy" Oct 31 05:43:26.484135 kubelet[1844]: I1031 05:43:26.484112 1844 container_manager_linux.go:304] "Creating device plugin manager" Oct 31 05:43:26.484497 kubelet[1844]: I1031 05:43:26.484464 1844 state_mem.go:36] "Initialized new in-memory state store" Oct 31 05:43:26.489206 kubelet[1844]: I1031 05:43:26.489180 1844 kubelet.go:446] "Attempting to sync node with API server" Oct 31 05:43:26.489372 kubelet[1844]: I1031 05:43:26.489347 1844 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 31 05:43:26.489565 kubelet[1844]: I1031 05:43:26.489520 1844 kubelet.go:352] "Adding apiserver pod source" Oct 31 05:43:26.489727 kubelet[1844]: I1031 05:43:26.489690 1844 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 31 05:43:26.497066 kubelet[1844]: W1031 05:43:26.496765 1844 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.21.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-f2mor.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.21.74:6443: connect: connection refused Oct 31 05:43:26.497066 kubelet[1844]: E1031 05:43:26.496856 1844 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.21.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-f2mor.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.21.74:6443: connect: connection refused" logger="UnhandledError" Oct 31 05:43:26.497856 kubelet[1844]: W1031 05:43:26.497419 1844 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.21.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.21.74:6443: connect: connection refused Oct 31 05:43:26.497856 kubelet[1844]: E1031 05:43:26.497475 1844 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.21.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.21.74:6443: connect: connection refused" logger="UnhandledError" Oct 31 05:43:26.497856 kubelet[1844]: I1031 05:43:26.497649 1844 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 31 05:43:26.498276 kubelet[1844]: I1031 05:43:26.498240 1844 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 31 05:43:26.500772 kubelet[1844]: W1031 05:43:26.500708 1844 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 31 05:43:26.504675 kubelet[1844]: I1031 05:43:26.504642 1844 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 31 05:43:26.504775 kubelet[1844]: I1031 05:43:26.504708 1844 server.go:1287] "Started kubelet" Oct 31 05:43:26.508855 kubelet[1844]: I1031 05:43:26.508795 1844 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 31 05:43:26.510540 kubelet[1844]: I1031 05:43:26.510503 1844 server.go:479] "Adding debug handlers to kubelet server" Oct 31 05:43:26.513038 kubelet[1844]: I1031 05:43:26.512943 1844 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 31 05:43:26.513399 kubelet[1844]: I1031 05:43:26.513369 1844 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 31 05:43:26.515000 audit[1844]: AVC avc: denied { mac_admin } for pid=1844 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:43:26.517288 kubelet[1844]: I1031 05:43:26.517224 1844 kubelet.go:1507] "Unprivileged containerized plugins 
might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins_registry: invalid argument" Oct 31 05:43:26.517518 kubelet[1844]: I1031 05:43:26.517470 1844 kubelet.go:1511] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Oct 31 05:43:26.518406 kubelet[1844]: I1031 05:43:26.518354 1844 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 31 05:43:26.522574 kernel: audit: type=1400 audit(1761889406.515:225): avc: denied { mac_admin } for pid=1844 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:43:26.515000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 05:43:26.526586 kernel: audit: type=1401 audit(1761889406.515:225): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 05:43:26.526902 kubelet[1844]: E1031 05:43:26.522860 1844 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.21.74:6443/api/v1/namespaces/default/events\": dial tcp 10.244.21.74:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-f2mor.gb1.brightbox.com.18737d14903c1c36 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-f2mor.gb1.brightbox.com,UID:srv-f2mor.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-f2mor.gb1.brightbox.com,},FirstTimestamp:2025-10-31 05:43:26.50467231 +0000 UTC m=+0.596223541,LastTimestamp:2025-10-31 05:43:26.50467231 +0000 UTC 
m=+0.596223541,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-f2mor.gb1.brightbox.com,}" Oct 31 05:43:26.515000 audit[1844]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000a37830 a1=c000909908 a2=c000a37800 a3=25 items=0 ppid=1 pid=1844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:26.515000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 05:43:26.516000 audit[1844]: AVC avc: denied { mac_admin } for pid=1844 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:43:26.516000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 05:43:26.516000 audit[1844]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b52360 a1=c000909920 a2=c000a378c0 a3=25 items=0 ppid=1 pid=1844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:26.516000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 05:43:26.531098 kubelet[1844]: E1031 05:43:26.529860 1844 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 31 05:43:26.531098 kubelet[1844]: I1031 05:43:26.530220 1844 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 31 05:43:26.531767 kubelet[1844]: I1031 05:43:26.531741 1844 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 31 05:43:26.532213 kubelet[1844]: E1031 05:43:26.532182 1844 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-f2mor.gb1.brightbox.com\" not found" Oct 31 05:43:26.532708 kubelet[1844]: I1031 05:43:26.532683 1844 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 31 05:43:26.532969 kubelet[1844]: I1031 05:43:26.532949 1844 reconciler.go:26] "Reconciler: start to sync state" Oct 31 05:43:26.534000 audit[1856]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1856 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 05:43:26.534000 audit[1856]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fffca328d00 a2=0 a3=7fffca328cec items=0 ppid=1844 pid=1856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:26.535230 kubelet[1844]: W1031 05:43:26.535122 1844 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.21.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.21.74:6443: connect: connection refused Oct 31 05:43:26.535230 kubelet[1844]: E1031 05:43:26.535179 1844 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.244.21.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.21.74:6443: connect: connection refused" logger="UnhandledError" Oct 31 05:43:26.534000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 31 05:43:26.535509 kubelet[1844]: E1031 05:43:26.535451 1844 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.21.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-f2mor.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.21.74:6443: connect: connection refused" interval="200ms" Oct 31 05:43:26.536264 kubelet[1844]: I1031 05:43:26.536236 1844 factory.go:221] Registration of the systemd container factory successfully Oct 31 05:43:26.536463 kubelet[1844]: I1031 05:43:26.536430 1844 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 31 05:43:26.537000 audit[1857]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1857 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 05:43:26.537000 audit[1857]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd74ebd790 a2=0 a3=7ffd74ebd77c items=0 ppid=1844 pid=1857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:26.537000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 31 05:43:26.538610 kubelet[1844]: I1031 05:43:26.538585 1844 factory.go:221] Registration of the containerd container factory successfully Oct 31 05:43:26.540000 audit[1859]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain 
pid=1859 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 05:43:26.540000 audit[1859]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe8b37fc70 a2=0 a3=7ffe8b37fc5c items=0 ppid=1844 pid=1859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:26.540000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 31 05:43:26.544000 audit[1861]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1861 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 05:43:26.544000 audit[1861]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd921b77a0 a2=0 a3=7ffd921b778c items=0 ppid=1844 pid=1861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:26.544000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 31 05:43:26.576000 audit[1866]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1866 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 05:43:26.576000 audit[1866]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffcfc61e6d0 a2=0 a3=7ffcfc61e6bc items=0 ppid=1844 pid=1866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:26.576000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 31 05:43:26.577674 kubelet[1844]: I1031 05:43:26.577616 1844 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 31 05:43:26.580000 audit[1868]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1868 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 05:43:26.580000 audit[1868]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff4c98e3d0 a2=0 a3=7fff4c98e3bc items=0 ppid=1844 pid=1868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:26.580000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 31 05:43:26.581220 kubelet[1844]: I1031 05:43:26.580946 1844 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 31 05:43:26.581220 kubelet[1844]: I1031 05:43:26.580995 1844 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 31 05:43:26.581220 kubelet[1844]: I1031 05:43:26.581043 1844 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 31 05:43:26.581220 kubelet[1844]: I1031 05:43:26.581076 1844 kubelet.go:2382] "Starting kubelet main sync loop" Oct 31 05:43:26.581220 kubelet[1844]: E1031 05:43:26.581156 1844 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 05:43:26.581000 audit[1870]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1870 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 05:43:26.581000 audit[1870]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffed8b5a150 a2=0 a3=7ffed8b5a13c items=0 ppid=1844 pid=1870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:26.581000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 31 05:43:26.583000 audit[1871]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1871 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 05:43:26.583000 audit[1871]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc4e1108a0 a2=0 a3=7ffc4e11088c items=0 ppid=1844 pid=1871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:26.583000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 31 05:43:26.585000 audit[1873]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_chain pid=1873 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 05:43:26.585000 audit[1873]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd186865e0 a2=0 
a3=7ffd186865cc items=0 ppid=1844 pid=1873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:26.585000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 31 05:43:26.587000 audit[1874]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=1874 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 05:43:26.587000 audit[1874]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff665e0db0 a2=0 a3=7fff665e0d9c items=0 ppid=1844 pid=1874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:26.587000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 31 05:43:26.589000 audit[1875]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1875 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 05:43:26.589000 audit[1875]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7fff73894ed0 a2=0 a3=7fff73894ebc items=0 ppid=1844 pid=1875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:26.589000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 31 05:43:26.591414 kubelet[1844]: I1031 05:43:26.591384 1844 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 05:43:26.591414 kubelet[1844]: I1031 05:43:26.591409 1844 cpu_manager.go:222] "Reconciling" 
reconcilePeriod="10s" Oct 31 05:43:26.591598 kubelet[1844]: I1031 05:43:26.591440 1844 state_mem.go:36] "Initialized new in-memory state store" Oct 31 05:43:26.591000 audit[1876]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1876 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 05:43:26.591000 audit[1876]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe81d56e30 a2=0 a3=7ffe81d56e1c items=0 ppid=1844 pid=1876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:26.591000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 31 05:43:26.593677 kubelet[1844]: I1031 05:43:26.593652 1844 policy_none.go:49] "None policy: Start" Oct 31 05:43:26.593861 kubelet[1844]: I1031 05:43:26.593836 1844 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 31 05:43:26.594017 kubelet[1844]: W1031 05:43:26.593721 1844 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.21.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.21.74:6443: connect: connection refused Oct 31 05:43:26.594134 kubelet[1844]: E1031 05:43:26.594046 1844 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.21.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.21.74:6443: connect: connection refused" logger="UnhandledError" Oct 31 05:43:26.594258 kubelet[1844]: I1031 05:43:26.594233 1844 state_mem.go:35] "Initializing new in-memory state store" Oct 31 05:43:26.601591 kubelet[1844]: I1031 05:43:26.601560 1844 manager.go:519] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 31 05:43:26.601000 audit[1844]: AVC avc: denied { mac_admin } for pid=1844 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:43:26.601000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 05:43:26.601000 audit[1844]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b8b170 a1=c0006d9788 a2=c000b8b140 a3=25 items=0 ppid=1 pid=1844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:26.601000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 05:43:26.602232 kubelet[1844]: I1031 05:43:26.602197 1844 server.go:94] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" Oct 31 05:43:26.602505 kubelet[1844]: I1031 05:43:26.602479 1844 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 05:43:26.602707 kubelet[1844]: I1031 05:43:26.602654 1844 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 05:43:26.605199 kubelet[1844]: I1031 05:43:26.605174 1844 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 05:43:26.610575 kubelet[1844]: E1031 05:43:26.610529 1844 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 31 05:43:26.610793 kubelet[1844]: E1031 05:43:26.610767 1844 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-f2mor.gb1.brightbox.com\" not found" Oct 31 05:43:26.693443 kubelet[1844]: E1031 05:43:26.693305 1844 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-f2mor.gb1.brightbox.com\" not found" node="srv-f2mor.gb1.brightbox.com" Oct 31 05:43:26.698358 kubelet[1844]: E1031 05:43:26.698326 1844 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-f2mor.gb1.brightbox.com\" not found" node="srv-f2mor.gb1.brightbox.com" Oct 31 05:43:26.700903 kubelet[1844]: E1031 05:43:26.700876 1844 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-f2mor.gb1.brightbox.com\" not found" node="srv-f2mor.gb1.brightbox.com" Oct 31 05:43:26.706075 kubelet[1844]: I1031 05:43:26.706049 1844 kubelet_node_status.go:75] "Attempting to register node" node="srv-f2mor.gb1.brightbox.com" Oct 31 05:43:26.706771 kubelet[1844]: E1031 05:43:26.706738 1844 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.21.74:6443/api/v1/nodes\": dial tcp 10.244.21.74:6443: connect: connection refused" node="srv-f2mor.gb1.brightbox.com" Oct 31 05:43:26.733678 kubelet[1844]: I1031 05:43:26.733612 1844 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/47cc668f8e6437f8ee5e857e0d7ec478-flexvolume-dir\") pod \"kube-controller-manager-srv-f2mor.gb1.brightbox.com\" (UID: \"47cc668f8e6437f8ee5e857e0d7ec478\") " pod="kube-system/kube-controller-manager-srv-f2mor.gb1.brightbox.com" Oct 31 05:43:26.734067 kubelet[1844]: I1031 05:43:26.734000 1844 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/52642e1bfefc134a2d4b850df0e2f710-kubeconfig\") pod \"kube-scheduler-srv-f2mor.gb1.brightbox.com\" (UID: \"52642e1bfefc134a2d4b850df0e2f710\") " pod="kube-system/kube-scheduler-srv-f2mor.gb1.brightbox.com" Oct 31 05:43:26.734237 kubelet[1844]: I1031 05:43:26.734205 1844 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ef9081bf479be92208be9465598674a-ca-certs\") pod \"kube-apiserver-srv-f2mor.gb1.brightbox.com\" (UID: \"2ef9081bf479be92208be9465598674a\") " pod="kube-system/kube-apiserver-srv-f2mor.gb1.brightbox.com" Oct 31 05:43:26.734425 kubelet[1844]: I1031 05:43:26.734396 1844 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ef9081bf479be92208be9465598674a-k8s-certs\") pod \"kube-apiserver-srv-f2mor.gb1.brightbox.com\" (UID: \"2ef9081bf479be92208be9465598674a\") " pod="kube-system/kube-apiserver-srv-f2mor.gb1.brightbox.com" Oct 31 05:43:26.734611 kubelet[1844]: I1031 05:43:26.734582 1844 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/47cc668f8e6437f8ee5e857e0d7ec478-k8s-certs\") pod \"kube-controller-manager-srv-f2mor.gb1.brightbox.com\" (UID: \"47cc668f8e6437f8ee5e857e0d7ec478\") " pod="kube-system/kube-controller-manager-srv-f2mor.gb1.brightbox.com" Oct 31 05:43:26.734803 kubelet[1844]: I1031 05:43:26.734763 1844 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/47cc668f8e6437f8ee5e857e0d7ec478-kubeconfig\") pod \"kube-controller-manager-srv-f2mor.gb1.brightbox.com\" (UID: \"47cc668f8e6437f8ee5e857e0d7ec478\") " 
pod="kube-system/kube-controller-manager-srv-f2mor.gb1.brightbox.com" Oct 31 05:43:26.734964 kubelet[1844]: I1031 05:43:26.734933 1844 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/47cc668f8e6437f8ee5e857e0d7ec478-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-f2mor.gb1.brightbox.com\" (UID: \"47cc668f8e6437f8ee5e857e0d7ec478\") " pod="kube-system/kube-controller-manager-srv-f2mor.gb1.brightbox.com" Oct 31 05:43:26.735144 kubelet[1844]: I1031 05:43:26.735115 1844 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ef9081bf479be92208be9465598674a-usr-share-ca-certificates\") pod \"kube-apiserver-srv-f2mor.gb1.brightbox.com\" (UID: \"2ef9081bf479be92208be9465598674a\") " pod="kube-system/kube-apiserver-srv-f2mor.gb1.brightbox.com" Oct 31 05:43:26.735347 kubelet[1844]: I1031 05:43:26.735304 1844 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/47cc668f8e6437f8ee5e857e0d7ec478-ca-certs\") pod \"kube-controller-manager-srv-f2mor.gb1.brightbox.com\" (UID: \"47cc668f8e6437f8ee5e857e0d7ec478\") " pod="kube-system/kube-controller-manager-srv-f2mor.gb1.brightbox.com" Oct 31 05:43:26.736589 kubelet[1844]: E1031 05:43:26.736527 1844 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.21.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-f2mor.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.21.74:6443: connect: connection refused" interval="400ms" Oct 31 05:43:26.910349 kubelet[1844]: I1031 05:43:26.910307 1844 kubelet_node_status.go:75] "Attempting to register node" node="srv-f2mor.gb1.brightbox.com" Oct 31 05:43:26.911117 kubelet[1844]: E1031 05:43:26.911083 1844 
kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.21.74:6443/api/v1/nodes\": dial tcp 10.244.21.74:6443: connect: connection refused" node="srv-f2mor.gb1.brightbox.com" Oct 31 05:43:26.999333 env[1308]: time="2025-10-31T05:43:26.998643740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-f2mor.gb1.brightbox.com,Uid:2ef9081bf479be92208be9465598674a,Namespace:kube-system,Attempt:0,}" Oct 31 05:43:27.000059 env[1308]: time="2025-10-31T05:43:26.999842603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-f2mor.gb1.brightbox.com,Uid:47cc668f8e6437f8ee5e857e0d7ec478,Namespace:kube-system,Attempt:0,}" Oct 31 05:43:27.002179 env[1308]: time="2025-10-31T05:43:27.002122768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-f2mor.gb1.brightbox.com,Uid:52642e1bfefc134a2d4b850df0e2f710,Namespace:kube-system,Attempt:0,}" Oct 31 05:43:27.138056 kubelet[1844]: E1031 05:43:27.137983 1844 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.21.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-f2mor.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.21.74:6443: connect: connection refused" interval="800ms" Oct 31 05:43:27.314897 kubelet[1844]: I1031 05:43:27.314764 1844 kubelet_node_status.go:75] "Attempting to register node" node="srv-f2mor.gb1.brightbox.com" Oct 31 05:43:27.315594 kubelet[1844]: E1031 05:43:27.315559 1844 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.21.74:6443/api/v1/nodes\": dial tcp 10.244.21.74:6443: connect: connection refused" node="srv-f2mor.gb1.brightbox.com" Oct 31 05:43:27.434438 kubelet[1844]: W1031 05:43:27.434331 1844 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.244.21.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-f2mor.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.21.74:6443: connect: connection refused Oct 31 05:43:27.434739 kubelet[1844]: E1031 05:43:27.434452 1844 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.21.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-f2mor.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.21.74:6443: connect: connection refused" logger="UnhandledError" Oct 31 05:43:27.689433 kubelet[1844]: W1031 05:43:27.689172 1844 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.21.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.21.74:6443: connect: connection refused Oct 31 05:43:27.689433 kubelet[1844]: E1031 05:43:27.689264 1844 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.21.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.21.74:6443: connect: connection refused" logger="UnhandledError" Oct 31 05:43:27.732998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1630220045.mount: Deactivated successfully. 
Oct 31 05:43:27.748617 env[1308]: time="2025-10-31T05:43:27.748520976Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 05:43:27.754875 env[1308]: time="2025-10-31T05:43:27.754806682Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 05:43:27.757353 env[1308]: time="2025-10-31T05:43:27.757312914Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 05:43:27.761109 env[1308]: time="2025-10-31T05:43:27.760975153Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 05:43:27.764338 env[1308]: time="2025-10-31T05:43:27.764284225Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 05:43:27.766855 env[1308]: time="2025-10-31T05:43:27.766821571Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 05:43:27.768426 env[1308]: time="2025-10-31T05:43:27.768390801Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 05:43:27.772268 env[1308]: time="2025-10-31T05:43:27.772234277Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Oct 31 05:43:27.776375 env[1308]: time="2025-10-31T05:43:27.776338217Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 05:43:27.779333 env[1308]: time="2025-10-31T05:43:27.779281737Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 05:43:27.781718 env[1308]: time="2025-10-31T05:43:27.781678624Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 05:43:27.782772 env[1308]: time="2025-10-31T05:43:27.782736954Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 05:43:27.827294 env[1308]: time="2025-10-31T05:43:27.825870727Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 05:43:27.827294 env[1308]: time="2025-10-31T05:43:27.825956172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 05:43:27.827294 env[1308]: time="2025-10-31T05:43:27.825974445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 05:43:27.835072 env[1308]: time="2025-10-31T05:43:27.834965593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 05:43:27.835213 env[1308]: time="2025-10-31T05:43:27.835080056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 05:43:27.835213 env[1308]: time="2025-10-31T05:43:27.835130845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 05:43:27.835547 env[1308]: time="2025-10-31T05:43:27.835466329Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e03e1dba8da9b8bd3ee48e7a64e2fb300e0f5defdf804ae627ebb52735b0f4e9 pid=1894 runtime=io.containerd.runc.v2 Oct 31 05:43:27.835874 env[1308]: time="2025-10-31T05:43:27.835797487Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/433688c9ae2e68cee317a6fcc6db865962609a850c8b04b4cf23c910ff7c5749 pid=1895 runtime=io.containerd.runc.v2 Oct 31 05:43:27.850836 env[1308]: time="2025-10-31T05:43:27.850733984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 05:43:27.851186 env[1308]: time="2025-10-31T05:43:27.850791247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 05:43:27.851368 env[1308]: time="2025-10-31T05:43:27.851143863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 05:43:27.851753 env[1308]: time="2025-10-31T05:43:27.851688512Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/04b31dddf1cebde51cf55c49a83d665af41040e1d405c84b89d08f8cf863a75d pid=1925 runtime=io.containerd.runc.v2 Oct 31 05:43:27.921474 kubelet[1844]: W1031 05:43:27.921345 1844 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.21.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.21.74:6443: connect: connection refused Oct 31 05:43:27.921474 kubelet[1844]: E1031 05:43:27.921411 1844 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.21.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.21.74:6443: connect: connection refused" logger="UnhandledError" Oct 31 05:43:27.939638 kubelet[1844]: E1031 05:43:27.939462 1844 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.21.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-f2mor.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.21.74:6443: connect: connection refused" interval="1.6s" Oct 31 05:43:27.983813 env[1308]: time="2025-10-31T05:43:27.983743435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-f2mor.gb1.brightbox.com,Uid:2ef9081bf479be92208be9465598674a,Namespace:kube-system,Attempt:0,} returns sandbox id \"433688c9ae2e68cee317a6fcc6db865962609a850c8b04b4cf23c910ff7c5749\"" Oct 31 05:43:27.989420 env[1308]: time="2025-10-31T05:43:27.989363115Z" level=info msg="CreateContainer within sandbox \"433688c9ae2e68cee317a6fcc6db865962609a850c8b04b4cf23c910ff7c5749\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 31 
05:43:28.014372 env[1308]: time="2025-10-31T05:43:28.014189001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-f2mor.gb1.brightbox.com,Uid:47cc668f8e6437f8ee5e857e0d7ec478,Namespace:kube-system,Attempt:0,} returns sandbox id \"e03e1dba8da9b8bd3ee48e7a64e2fb300e0f5defdf804ae627ebb52735b0f4e9\"" Oct 31 05:43:28.018949 env[1308]: time="2025-10-31T05:43:28.018909753Z" level=info msg="CreateContainer within sandbox \"e03e1dba8da9b8bd3ee48e7a64e2fb300e0f5defdf804ae627ebb52735b0f4e9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 31 05:43:28.028902 env[1308]: time="2025-10-31T05:43:28.027008688Z" level=info msg="CreateContainer within sandbox \"433688c9ae2e68cee317a6fcc6db865962609a850c8b04b4cf23c910ff7c5749\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cfd2e451ce06b21a7099b81a31ba0da9136ed7ad0c734b46a5cd02d73b3a856d\"" Oct 31 05:43:28.030390 env[1308]: time="2025-10-31T05:43:28.030307387Z" level=info msg="StartContainer for \"cfd2e451ce06b21a7099b81a31ba0da9136ed7ad0c734b46a5cd02d73b3a856d\"" Oct 31 05:43:28.046561 env[1308]: time="2025-10-31T05:43:28.046467890Z" level=info msg="CreateContainer within sandbox \"e03e1dba8da9b8bd3ee48e7a64e2fb300e0f5defdf804ae627ebb52735b0f4e9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bb0eed3f8df728b82a56d540273a3526a7a416aae185fb4dc3e2acacd39897e1\"" Oct 31 05:43:28.048116 env[1308]: time="2025-10-31T05:43:28.048062372Z" level=info msg="StartContainer for \"bb0eed3f8df728b82a56d540273a3526a7a416aae185fb4dc3e2acacd39897e1\"" Oct 31 05:43:28.061169 env[1308]: time="2025-10-31T05:43:28.061089665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-f2mor.gb1.brightbox.com,Uid:52642e1bfefc134a2d4b850df0e2f710,Namespace:kube-system,Attempt:0,} returns sandbox id \"04b31dddf1cebde51cf55c49a83d665af41040e1d405c84b89d08f8cf863a75d\"" Oct 31 05:43:28.064814 env[1308]: 
time="2025-10-31T05:43:28.064776323Z" level=info msg="CreateContainer within sandbox \"04b31dddf1cebde51cf55c49a83d665af41040e1d405c84b89d08f8cf863a75d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 31 05:43:28.066598 kubelet[1844]: W1031 05:43:28.066504 1844 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.21.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.21.74:6443: connect: connection refused Oct 31 05:43:28.066716 kubelet[1844]: E1031 05:43:28.066615 1844 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.21.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.21.74:6443: connect: connection refused" logger="UnhandledError" Oct 31 05:43:28.084796 env[1308]: time="2025-10-31T05:43:28.083063762Z" level=info msg="CreateContainer within sandbox \"04b31dddf1cebde51cf55c49a83d665af41040e1d405c84b89d08f8cf863a75d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9401755d780a01be44d2517394fd6b1291a6a01e862f7fefa7dd81c4d7eaad84\"" Oct 31 05:43:28.091569 env[1308]: time="2025-10-31T05:43:28.088773978Z" level=info msg="StartContainer for \"9401755d780a01be44d2517394fd6b1291a6a01e862f7fefa7dd81c4d7eaad84\"" Oct 31 05:43:28.119579 kubelet[1844]: I1031 05:43:28.118567 1844 kubelet_node_status.go:75] "Attempting to register node" node="srv-f2mor.gb1.brightbox.com" Oct 31 05:43:28.119579 kubelet[1844]: E1031 05:43:28.118992 1844 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.21.74:6443/api/v1/nodes\": dial tcp 10.244.21.74:6443: connect: connection refused" node="srv-f2mor.gb1.brightbox.com" Oct 31 05:43:28.228817 env[1308]: time="2025-10-31T05:43:28.227639749Z" level=info 
msg="StartContainer for \"cfd2e451ce06b21a7099b81a31ba0da9136ed7ad0c734b46a5cd02d73b3a856d\" returns successfully" Oct 31 05:43:28.243061 env[1308]: time="2025-10-31T05:43:28.242976995Z" level=info msg="StartContainer for \"bb0eed3f8df728b82a56d540273a3526a7a416aae185fb4dc3e2acacd39897e1\" returns successfully" Oct 31 05:43:28.244678 env[1308]: time="2025-10-31T05:43:28.244642415Z" level=info msg="StartContainer for \"9401755d780a01be44d2517394fd6b1291a6a01e862f7fefa7dd81c4d7eaad84\" returns successfully" Oct 31 05:43:28.602704 kubelet[1844]: E1031 05:43:28.602660 1844 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-f2mor.gb1.brightbox.com\" not found" node="srv-f2mor.gb1.brightbox.com" Oct 31 05:43:28.632565 kubelet[1844]: E1031 05:43:28.629900 1844 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-f2mor.gb1.brightbox.com\" not found" node="srv-f2mor.gb1.brightbox.com" Oct 31 05:43:28.634141 kubelet[1844]: E1031 05:43:28.634115 1844 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-f2mor.gb1.brightbox.com\" not found" node="srv-f2mor.gb1.brightbox.com" Oct 31 05:43:28.652828 kubelet[1844]: E1031 05:43:28.652782 1844 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.244.21.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.21.74:6443: connect: connection refused" logger="UnhandledError" Oct 31 05:43:29.629213 kubelet[1844]: E1031 05:43:29.629161 1844 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-f2mor.gb1.brightbox.com\" not found" node="srv-f2mor.gb1.brightbox.com" Oct 31 05:43:29.633729 kubelet[1844]: 
E1031 05:43:29.633687 1844 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-f2mor.gb1.brightbox.com\" not found" node="srv-f2mor.gb1.brightbox.com" Oct 31 05:43:29.721827 kubelet[1844]: I1031 05:43:29.721782 1844 kubelet_node_status.go:75] "Attempting to register node" node="srv-f2mor.gb1.brightbox.com" Oct 31 05:43:30.894776 kubelet[1844]: E1031 05:43:30.894695 1844 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-f2mor.gb1.brightbox.com\" not found" node="srv-f2mor.gb1.brightbox.com" Oct 31 05:43:31.032507 kubelet[1844]: I1031 05:43:31.032453 1844 kubelet_node_status.go:78] "Successfully registered node" node="srv-f2mor.gb1.brightbox.com" Oct 31 05:43:31.032507 kubelet[1844]: E1031 05:43:31.032521 1844 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-f2mor.gb1.brightbox.com\": node \"srv-f2mor.gb1.brightbox.com\" not found" Oct 31 05:43:31.133714 kubelet[1844]: I1031 05:43:31.133661 1844 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-f2mor.gb1.brightbox.com" Oct 31 05:43:31.141507 kubelet[1844]: E1031 05:43:31.141474 1844 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-f2mor.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-f2mor.gb1.brightbox.com" Oct 31 05:43:31.141725 kubelet[1844]: I1031 05:43:31.141698 1844 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-f2mor.gb1.brightbox.com" Oct 31 05:43:31.145392 kubelet[1844]: E1031 05:43:31.145295 1844 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-f2mor.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-controller-manager-srv-f2mor.gb1.brightbox.com" Oct 31 05:43:31.145639 kubelet[1844]: I1031 05:43:31.145595 1844 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-f2mor.gb1.brightbox.com" Oct 31 05:43:31.148228 kubelet[1844]: E1031 05:43:31.148196 1844 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-f2mor.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-f2mor.gb1.brightbox.com" Oct 31 05:43:31.501487 kubelet[1844]: I1031 05:43:31.501373 1844 apiserver.go:52] "Watching apiserver" Oct 31 05:43:31.532969 kubelet[1844]: I1031 05:43:31.532908 1844 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 31 05:43:31.778810 kubelet[1844]: I1031 05:43:31.778333 1844 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-f2mor.gb1.brightbox.com" Oct 31 05:43:31.794388 kubelet[1844]: W1031 05:43:31.794346 1844 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 31 05:43:32.192261 kubelet[1844]: I1031 05:43:32.192113 1844 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-f2mor.gb1.brightbox.com" Oct 31 05:43:32.204771 kubelet[1844]: W1031 05:43:32.204721 1844 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 31 05:43:32.970593 systemd[1]: Reloading. 
Oct 31 05:43:33.110571 /usr/lib/systemd/system-generators/torcx-generator[2138]: time="2025-10-31T05:43:33Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Oct 31 05:43:33.111258 /usr/lib/systemd/system-generators/torcx-generator[2138]: time="2025-10-31T05:43:33Z" level=info msg="torcx already run" Oct 31 05:43:33.230514 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 31 05:43:33.230570 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 31 05:43:33.263421 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 31 05:43:33.422870 systemd[1]: Stopping kubelet.service... Oct 31 05:43:33.444428 systemd[1]: kubelet.service: Deactivated successfully. Oct 31 05:43:33.445119 systemd[1]: Stopped kubelet.service. Oct 31 05:43:33.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:43:33.448672 kernel: kauditd_printk_skb: 46 callbacks suppressed Oct 31 05:43:33.448837 kernel: audit: type=1131 audit(1761889413.444:240): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:43:33.452063 systemd[1]: Starting kubelet.service... Oct 31 05:43:34.846947 systemd[1]: Started kubelet.service. 
Oct 31 05:43:34.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:43:34.861051 kernel: audit: type=1130 audit(1761889414.846:241): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:43:34.955516 kubelet[2197]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 05:43:34.956209 kubelet[2197]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 31 05:43:34.956209 kubelet[2197]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 31 05:43:34.956209 kubelet[2197]: I1031 05:43:34.955835 2197 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 05:43:34.976303 kubelet[2197]: I1031 05:43:34.976254 2197 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 31 05:43:34.976303 kubelet[2197]: I1031 05:43:34.976292 2197 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 05:43:34.976665 kubelet[2197]: I1031 05:43:34.976639 2197 server.go:954] "Client rotation is on, will bootstrap in background" Oct 31 05:43:34.981403 kubelet[2197]: I1031 05:43:34.981371 2197 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 31 05:43:34.986552 kubelet[2197]: I1031 05:43:34.986501 2197 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 05:43:34.999682 kubelet[2197]: E1031 05:43:34.999629 2197 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 31 05:43:34.999682 kubelet[2197]: I1031 05:43:34.999678 2197 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 31 05:43:35.011118 kubelet[2197]: I1031 05:43:35.011061 2197 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 31 05:43:35.011920 kubelet[2197]: I1031 05:43:35.011860 2197 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 31 05:43:35.012241 kubelet[2197]: I1031 05:43:35.011918 2197 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-f2mor.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Oct 31 05:43:35.012484 kubelet[2197]: I1031 05:43:35.012250 2197 topology_manager.go:138] "Creating topology manager 
with none policy" Oct 31 05:43:35.012484 kubelet[2197]: I1031 05:43:35.012273 2197 container_manager_linux.go:304] "Creating device plugin manager" Oct 31 05:43:35.012484 kubelet[2197]: I1031 05:43:35.012338 2197 state_mem.go:36] "Initialized new in-memory state store" Oct 31 05:43:35.017661 kubelet[2197]: I1031 05:43:35.017614 2197 kubelet.go:446] "Attempting to sync node with API server" Oct 31 05:43:35.017661 kubelet[2197]: I1031 05:43:35.017671 2197 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 31 05:43:35.017921 kubelet[2197]: I1031 05:43:35.017707 2197 kubelet.go:352] "Adding apiserver pod source" Oct 31 05:43:35.017921 kubelet[2197]: I1031 05:43:35.017727 2197 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 31 05:43:35.043763 kubelet[2197]: I1031 05:43:35.043718 2197 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 31 05:43:35.044323 kubelet[2197]: I1031 05:43:35.044293 2197 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 31 05:43:35.065961 kubelet[2197]: I1031 05:43:35.065564 2197 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 31 05:43:35.065961 kubelet[2197]: I1031 05:43:35.065628 2197 server.go:1287] "Started kubelet" Oct 31 05:43:35.080664 kubelet[2197]: I1031 05:43:35.079871 2197 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 31 05:43:35.086996 kubelet[2197]: I1031 05:43:35.085451 2197 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 31 05:43:35.088671 kubelet[2197]: I1031 05:43:35.088120 2197 server.go:479] "Adding debug handlers to kubelet server" Oct 31 05:43:35.092000 audit[2197]: AVC avc: denied { mac_admin } for pid=2197 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 
05:43:35.094099 kubelet[2197]: I1031 05:43:35.093148 2197 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 31 05:43:35.100650 kernel: audit: type=1400 audit(1761889415.092:242): avc: denied { mac_admin } for pid=2197 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:43:35.100777 kernel: audit: type=1401 audit(1761889415.092:242): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 05:43:35.092000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 05:43:35.104033 kubelet[2197]: I1031 05:43:35.103597 2197 kubelet.go:1507] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins_registry: invalid argument" Oct 31 05:43:35.104033 kubelet[2197]: I1031 05:43:35.103744 2197 kubelet.go:1511] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/plugins: invalid argument" Oct 31 05:43:35.104033 kubelet[2197]: I1031 05:43:35.103789 2197 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 31 05:43:35.092000 audit[2197]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000826e70 a1=c0005a1890 a2=c000826e40 a3=25 items=0 ppid=1 pid=2197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:35.106606 kubelet[2197]: E1031 05:43:35.106578 2197 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 31 05:43:35.109255 kubelet[2197]: I1031 05:43:35.109210 2197 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 31 05:43:35.113699 kernel: audit: type=1300 audit(1761889415.092:242): arch=c000003e syscall=188 success=no exit=-22 a0=c000826e70 a1=c0005a1890 a2=c000826e40 a3=25 items=0 ppid=1 pid=2197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:35.117466 kubelet[2197]: I1031 05:43:35.113979 2197 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 31 05:43:35.117466 kubelet[2197]: I1031 05:43:35.117165 2197 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 31 05:43:35.117466 kubelet[2197]: I1031 05:43:35.117360 2197 reconciler.go:26] "Reconciler: start to sync state" Oct 31 05:43:35.092000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 05:43:35.124243 kubelet[2197]: I1031 05:43:35.124216 2197 factory.go:221] Registration of the systemd container factory successfully Oct 31 05:43:35.125182 kubelet[2197]: I1031 05:43:35.124556 2197 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 31 05:43:35.103000 audit[2197]: AVC avc: denied { mac_admin } for pid=2197 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 
05:43:35.132424 kernel: audit: type=1327 audit(1761889415.092:242): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 05:43:35.133439 kernel: audit: type=1400 audit(1761889415.103:243): avc: denied { mac_admin } for pid=2197 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:43:35.133492 kernel: audit: type=1401 audit(1761889415.103:243): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 05:43:35.103000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 05:43:35.103000 audit[2197]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0005c3c80 a1=c0005a1f38 a2=c000c0a210 a3=25 items=0 ppid=1 pid=2197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:35.145985 kernel: audit: type=1300 audit(1761889415.103:243): arch=c000003e syscall=188 success=no exit=-22 a0=c0005c3c80 a1=c0005a1f38 a2=c000c0a210 a3=25 items=0 ppid=1 pid=2197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:35.103000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 05:43:35.166587 kernel: audit: type=1327 audit(1761889415.103:243): 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 05:43:35.178495 kubelet[2197]: I1031 05:43:35.173944 2197 factory.go:221] Registration of the containerd container factory successfully Oct 31 05:43:35.222518 kubelet[2197]: I1031 05:43:35.221517 2197 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 31 05:43:35.223659 kubelet[2197]: I1031 05:43:35.222879 2197 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 31 05:43:35.223659 kubelet[2197]: I1031 05:43:35.222912 2197 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 31 05:43:35.223659 kubelet[2197]: I1031 05:43:35.222962 2197 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 31 05:43:35.223659 kubelet[2197]: I1031 05:43:35.222979 2197 kubelet.go:2382] "Starting kubelet main sync loop" Oct 31 05:43:35.223659 kubelet[2197]: E1031 05:43:35.223062 2197 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 05:43:35.323582 kubelet[2197]: E1031 05:43:35.323492 2197 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 31 05:43:35.324407 kubelet[2197]: I1031 05:43:35.324381 2197 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 05:43:35.324605 kubelet[2197]: I1031 05:43:35.324576 2197 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 31 05:43:35.324749 kubelet[2197]: I1031 05:43:35.324727 2197 state_mem.go:36] "Initialized new in-memory state store" Oct 31 05:43:35.325165 kubelet[2197]: I1031 05:43:35.325137 2197 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 31 05:43:35.325322 kubelet[2197]: I1031 05:43:35.325278 2197 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 31 05:43:35.325458 kubelet[2197]: I1031 05:43:35.325435 2197 policy_none.go:49] "None policy: Start" Oct 31 05:43:35.325591 kubelet[2197]: I1031 05:43:35.325568 2197 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 31 05:43:35.325723 kubelet[2197]: I1031 05:43:35.325701 2197 state_mem.go:35] "Initializing new in-memory state store" Oct 31 05:43:35.326085 kubelet[2197]: I1031 05:43:35.326060 2197 state_mem.go:75] "Updated machine memory state" Oct 31 05:43:35.333648 kubelet[2197]: I1031 05:43:35.333603 2197 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 31 05:43:35.333905 kubelet[2197]: I1031 05:43:35.333861 2197 server.go:94] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr(label=system_u:object_r:container_file_t:s0) /var/lib/kubelet/device-plugins/: invalid argument" Oct 31 05:43:35.333000 audit[2197]: AVC avc: denied { mac_admin } for pid=2197 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:43:35.333000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 31 05:43:35.333000 audit[2197]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000fdea20 a1=c000fab4d0 a2=c000fde9f0 a3=25 items=0 ppid=1 pid=2197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:35.333000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 31 05:43:35.335577 kubelet[2197]: I1031 05:43:35.334584 2197 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 05:43:35.335880 kubelet[2197]: I1031 05:43:35.335800 2197 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 05:43:35.339577 kubelet[2197]: I1031 05:43:35.339338 2197 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 05:43:35.342762 kubelet[2197]: E1031 05:43:35.342729 2197 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 31 05:43:35.459400 kubelet[2197]: I1031 05:43:35.459256 2197 kubelet_node_status.go:75] "Attempting to register node" node="srv-f2mor.gb1.brightbox.com" Oct 31 05:43:35.473195 kubelet[2197]: I1031 05:43:35.473155 2197 kubelet_node_status.go:124] "Node was previously registered" node="srv-f2mor.gb1.brightbox.com" Oct 31 05:43:35.473641 kubelet[2197]: I1031 05:43:35.473464 2197 kubelet_node_status.go:78] "Successfully registered node" node="srv-f2mor.gb1.brightbox.com" Oct 31 05:43:35.526050 kubelet[2197]: I1031 05:43:35.525999 2197 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-f2mor.gb1.brightbox.com" Oct 31 05:43:35.526615 kubelet[2197]: I1031 05:43:35.526581 2197 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-f2mor.gb1.brightbox.com" Oct 31 05:43:35.526827 kubelet[2197]: I1031 05:43:35.526802 2197 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-f2mor.gb1.brightbox.com" Oct 31 05:43:35.548349 kubelet[2197]: W1031 05:43:35.548309 2197 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 31 05:43:35.548997 kubelet[2197]: W1031 05:43:35.548969 2197 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 31 05:43:35.549124 kubelet[2197]: E1031 05:43:35.549035 2197 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-f2mor.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-f2mor.gb1.brightbox.com" Oct 31 05:43:35.549124 kubelet[2197]: W1031 05:43:35.549130 2197 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] 
Oct 31 05:43:35.549616 kubelet[2197]: E1031 05:43:35.549173 2197 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-f2mor.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-f2mor.gb1.brightbox.com" Oct 31 05:43:35.621242 kubelet[2197]: I1031 05:43:35.621168 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/47cc668f8e6437f8ee5e857e0d7ec478-k8s-certs\") pod \"kube-controller-manager-srv-f2mor.gb1.brightbox.com\" (UID: \"47cc668f8e6437f8ee5e857e0d7ec478\") " pod="kube-system/kube-controller-manager-srv-f2mor.gb1.brightbox.com" Oct 31 05:43:35.621514 kubelet[2197]: I1031 05:43:35.621286 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/47cc668f8e6437f8ee5e857e0d7ec478-kubeconfig\") pod \"kube-controller-manager-srv-f2mor.gb1.brightbox.com\" (UID: \"47cc668f8e6437f8ee5e857e0d7ec478\") " pod="kube-system/kube-controller-manager-srv-f2mor.gb1.brightbox.com" Oct 31 05:43:35.621514 kubelet[2197]: I1031 05:43:35.621376 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/47cc668f8e6437f8ee5e857e0d7ec478-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-f2mor.gb1.brightbox.com\" (UID: \"47cc668f8e6437f8ee5e857e0d7ec478\") " pod="kube-system/kube-controller-manager-srv-f2mor.gb1.brightbox.com" Oct 31 05:43:35.621514 kubelet[2197]: I1031 05:43:35.621412 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ef9081bf479be92208be9465598674a-ca-certs\") pod \"kube-apiserver-srv-f2mor.gb1.brightbox.com\" (UID: \"2ef9081bf479be92208be9465598674a\") " 
pod="kube-system/kube-apiserver-srv-f2mor.gb1.brightbox.com" Oct 31 05:43:35.621722 kubelet[2197]: I1031 05:43:35.621506 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ef9081bf479be92208be9465598674a-k8s-certs\") pod \"kube-apiserver-srv-f2mor.gb1.brightbox.com\" (UID: \"2ef9081bf479be92208be9465598674a\") " pod="kube-system/kube-apiserver-srv-f2mor.gb1.brightbox.com" Oct 31 05:43:35.621722 kubelet[2197]: I1031 05:43:35.621606 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/47cc668f8e6437f8ee5e857e0d7ec478-ca-certs\") pod \"kube-controller-manager-srv-f2mor.gb1.brightbox.com\" (UID: \"47cc668f8e6437f8ee5e857e0d7ec478\") " pod="kube-system/kube-controller-manager-srv-f2mor.gb1.brightbox.com" Oct 31 05:43:35.621722 kubelet[2197]: I1031 05:43:35.621674 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/47cc668f8e6437f8ee5e857e0d7ec478-flexvolume-dir\") pod \"kube-controller-manager-srv-f2mor.gb1.brightbox.com\" (UID: \"47cc668f8e6437f8ee5e857e0d7ec478\") " pod="kube-system/kube-controller-manager-srv-f2mor.gb1.brightbox.com" Oct 31 05:43:35.621910 kubelet[2197]: I1031 05:43:35.621738 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/52642e1bfefc134a2d4b850df0e2f710-kubeconfig\") pod \"kube-scheduler-srv-f2mor.gb1.brightbox.com\" (UID: \"52642e1bfefc134a2d4b850df0e2f710\") " pod="kube-system/kube-scheduler-srv-f2mor.gb1.brightbox.com" Oct 31 05:43:35.621910 kubelet[2197]: I1031 05:43:35.621808 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/2ef9081bf479be92208be9465598674a-usr-share-ca-certificates\") pod \"kube-apiserver-srv-f2mor.gb1.brightbox.com\" (UID: \"2ef9081bf479be92208be9465598674a\") " pod="kube-system/kube-apiserver-srv-f2mor.gb1.brightbox.com" Oct 31 05:43:36.042501 kubelet[2197]: I1031 05:43:36.042435 2197 apiserver.go:52] "Watching apiserver" Oct 31 05:43:36.118232 kubelet[2197]: I1031 05:43:36.118168 2197 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 31 05:43:36.164188 kubelet[2197]: I1031 05:43:36.163641 2197 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-f2mor.gb1.brightbox.com" podStartSLOduration=4.163563573 podStartE2EDuration="4.163563573s" podCreationTimestamp="2025-10-31 05:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 05:43:36.162832768 +0000 UTC m=+1.276749997" watchObservedRunningTime="2025-10-31 05:43:36.163563573 +0000 UTC m=+1.277480794" Oct 31 05:43:36.174585 kubelet[2197]: I1031 05:43:36.174508 2197 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-f2mor.gb1.brightbox.com" podStartSLOduration=5.174491432 podStartE2EDuration="5.174491432s" podCreationTimestamp="2025-10-31 05:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 05:43:36.174346265 +0000 UTC m=+1.288263508" watchObservedRunningTime="2025-10-31 05:43:36.174491432 +0000 UTC m=+1.288408666" Oct 31 05:43:36.199213 kubelet[2197]: I1031 05:43:36.199114 2197 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-f2mor.gb1.brightbox.com" podStartSLOduration=1.199093754 podStartE2EDuration="1.199093754s" podCreationTimestamp="2025-10-31 05:43:35 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 05:43:36.185854476 +0000 UTC m=+1.299771703" watchObservedRunningTime="2025-10-31 05:43:36.199093754 +0000 UTC m=+1.313010988" Oct 31 05:43:36.273561 kubelet[2197]: I1031 05:43:36.273482 2197 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-f2mor.gb1.brightbox.com" Oct 31 05:43:36.274662 kubelet[2197]: I1031 05:43:36.274606 2197 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-f2mor.gb1.brightbox.com" Oct 31 05:43:36.286840 kubelet[2197]: W1031 05:43:36.286788 2197 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 31 05:43:36.287024 kubelet[2197]: E1031 05:43:36.286856 2197 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-f2mor.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-f2mor.gb1.brightbox.com" Oct 31 05:43:36.287195 kubelet[2197]: W1031 05:43:36.287148 2197 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Oct 31 05:43:36.287322 kubelet[2197]: E1031 05:43:36.287194 2197 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-f2mor.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-f2mor.gb1.brightbox.com" Oct 31 05:43:38.944351 kubelet[2197]: I1031 05:43:38.944289 2197 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 31 05:43:38.945579 env[1308]: time="2025-10-31T05:43:38.945381027Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 31 05:43:38.946133 kubelet[2197]: I1031 05:43:38.945631 2197 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 31 05:43:39.847011 kubelet[2197]: I1031 05:43:39.846909 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88c9b798-7dba-400c-a477-97d2130b0331-lib-modules\") pod \"kube-proxy-z4sj2\" (UID: \"88c9b798-7dba-400c-a477-97d2130b0331\") " pod="kube-system/kube-proxy-z4sj2" Oct 31 05:43:39.847268 kubelet[2197]: I1031 05:43:39.847040 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/88c9b798-7dba-400c-a477-97d2130b0331-kube-proxy\") pod \"kube-proxy-z4sj2\" (UID: \"88c9b798-7dba-400c-a477-97d2130b0331\") " pod="kube-system/kube-proxy-z4sj2" Oct 31 05:43:39.847268 kubelet[2197]: I1031 05:43:39.847118 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/88c9b798-7dba-400c-a477-97d2130b0331-xtables-lock\") pod \"kube-proxy-z4sj2\" (UID: \"88c9b798-7dba-400c-a477-97d2130b0331\") " pod="kube-system/kube-proxy-z4sj2" Oct 31 05:43:39.847268 kubelet[2197]: I1031 05:43:39.847153 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sn5q\" (UniqueName: \"kubernetes.io/projected/88c9b798-7dba-400c-a477-97d2130b0331-kube-api-access-7sn5q\") pod \"kube-proxy-z4sj2\" (UID: \"88c9b798-7dba-400c-a477-97d2130b0331\") " pod="kube-system/kube-proxy-z4sj2" Oct 31 05:43:39.969362 kubelet[2197]: I1031 05:43:39.969271 2197 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Oct 31 05:43:40.041507 kubelet[2197]: I1031 05:43:40.041428 2197 status_manager.go:890] "Failed to get status for pod" podUID="4dc58df2-cd03-4dfd-952d-b874ce0cd3f5" pod="tigera-operator/tigera-operator-7dcd859c48-2j9lj" err="pods \"tigera-operator-7dcd859c48-2j9lj\" is forbidden: User \"system:node:srv-f2mor.gb1.brightbox.com\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'srv-f2mor.gb1.brightbox.com' and this object" Oct 31 05:43:40.042936 kubelet[2197]: W1031 05:43:40.042859 2197 reflector.go:569] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:srv-f2mor.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'srv-f2mor.gb1.brightbox.com' and this object Oct 31 05:43:40.043031 kubelet[2197]: E1031 05:43:40.042945 2197 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:srv-f2mor.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'srv-f2mor.gb1.brightbox.com' and this object" logger="UnhandledError" Oct 31 05:43:40.049783 kubelet[2197]: I1031 05:43:40.049740 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4dc58df2-cd03-4dfd-952d-b874ce0cd3f5-var-lib-calico\") pod \"tigera-operator-7dcd859c48-2j9lj\" (UID: \"4dc58df2-cd03-4dfd-952d-b874ce0cd3f5\") " 
pod="tigera-operator/tigera-operator-7dcd859c48-2j9lj" Oct 31 05:43:40.049937 kubelet[2197]: I1031 05:43:40.049823 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr9dt\" (UniqueName: \"kubernetes.io/projected/4dc58df2-cd03-4dfd-952d-b874ce0cd3f5-kube-api-access-fr9dt\") pod \"tigera-operator-7dcd859c48-2j9lj\" (UID: \"4dc58df2-cd03-4dfd-952d-b874ce0cd3f5\") " pod="tigera-operator/tigera-operator-7dcd859c48-2j9lj" Oct 31 05:43:40.133097 env[1308]: time="2025-10-31T05:43:40.131784316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z4sj2,Uid:88c9b798-7dba-400c-a477-97d2130b0331,Namespace:kube-system,Attempt:0,}" Oct 31 05:43:40.171757 env[1308]: time="2025-10-31T05:43:40.160662549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 05:43:40.171757 env[1308]: time="2025-10-31T05:43:40.160726653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 05:43:40.171757 env[1308]: time="2025-10-31T05:43:40.160746043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 05:43:40.171757 env[1308]: time="2025-10-31T05:43:40.161171503Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9142f8e149cb16d533904f4ee2e39f9262c31cb9ddbcf99d608ebccaaf495cdd pid=2251 runtime=io.containerd.runc.v2 Oct 31 05:43:40.243917 env[1308]: time="2025-10-31T05:43:40.243818980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z4sj2,Uid:88c9b798-7dba-400c-a477-97d2130b0331,Namespace:kube-system,Attempt:0,} returns sandbox id \"9142f8e149cb16d533904f4ee2e39f9262c31cb9ddbcf99d608ebccaaf495cdd\"" Oct 31 05:43:40.251010 env[1308]: time="2025-10-31T05:43:40.250961394Z" level=info msg="CreateContainer within sandbox \"9142f8e149cb16d533904f4ee2e39f9262c31cb9ddbcf99d608ebccaaf495cdd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 31 05:43:40.279103 env[1308]: time="2025-10-31T05:43:40.279032057Z" level=info msg="CreateContainer within sandbox \"9142f8e149cb16d533904f4ee2e39f9262c31cb9ddbcf99d608ebccaaf495cdd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f692df22b6196ad92261020632ddb427c593b63bd698d05f45f0bf279f9643e8\"" Oct 31 05:43:40.281846 env[1308]: time="2025-10-31T05:43:40.281807211Z" level=info msg="StartContainer for \"f692df22b6196ad92261020632ddb427c593b63bd698d05f45f0bf279f9643e8\"" Oct 31 05:43:40.337599 env[1308]: time="2025-10-31T05:43:40.337523156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-2j9lj,Uid:4dc58df2-cd03-4dfd-952d-b874ce0cd3f5,Namespace:tigera-operator,Attempt:0,}" Oct 31 05:43:40.364127 env[1308]: time="2025-10-31T05:43:40.364060425Z" level=info msg="StartContainer for \"f692df22b6196ad92261020632ddb427c593b63bd698d05f45f0bf279f9643e8\" returns successfully" Oct 31 05:43:40.370461 env[1308]: time="2025-10-31T05:43:40.370200674Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 05:43:40.370795 env[1308]: time="2025-10-31T05:43:40.370424755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 05:43:40.370795 env[1308]: time="2025-10-31T05:43:40.370475638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 05:43:40.371382 env[1308]: time="2025-10-31T05:43:40.371286626Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d3fbbf18a6d0cbb96e27fcb41c6e6f3e9f329d053f2f7478378113ec3672fd3 pid=2321 runtime=io.containerd.runc.v2 Oct 31 05:43:40.454121 env[1308]: time="2025-10-31T05:43:40.453293862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-2j9lj,Uid:4dc58df2-cd03-4dfd-952d-b874ce0cd3f5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1d3fbbf18a6d0cbb96e27fcb41c6e6f3e9f329d053f2f7478378113ec3672fd3\"" Oct 31 05:43:40.457458 env[1308]: time="2025-10-31T05:43:40.457423658Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 31 05:43:40.884000 audit[2393]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2393 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 05:43:40.888342 kernel: kauditd_printk_skb: 4 callbacks suppressed Oct 31 05:43:40.888450 kernel: audit: type=1325 audit(1761889420.884:245): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2393 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 05:43:40.884000 audit[2393]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd56ab8d90 a2=0 a3=7ffd56ab8d7c items=0 ppid=2302 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:40.900159 kernel: audit: type=1300 audit(1761889420.884:245): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd56ab8d90 a2=0 a3=7ffd56ab8d7c items=0 ppid=2302 pid=2393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:40.884000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 31 05:43:40.905578 kernel: audit: type=1327 audit(1761889420.884:245): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 31 05:43:40.887000 audit[2394]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_chain pid=2394 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 05:43:40.887000 audit[2394]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa0e3ff10 a2=0 a3=7fffa0e3fefc items=0 ppid=2302 pid=2394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:40.918065 kernel: audit: type=1325 audit(1761889420.887:246): table=nat:39 family=2 entries=1 op=nft_register_chain pid=2394 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 05:43:40.918194 kernel: audit: type=1300 audit(1761889420.887:246): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa0e3ff10 a2=0 a3=7fffa0e3fefc items=0 ppid=2302 pid=2394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:40.887000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 31 05:43:40.924571 kernel: audit: type=1327 audit(1761889420.887:246): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 31 05:43:40.891000 audit[2395]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_chain pid=2395 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 05:43:40.891000 audit[2395]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc0d741490 a2=0 a3=7ffc0d74147c items=0 ppid=2302 pid=2395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:40.937289 kernel: audit: type=1325 audit(1761889420.891:247): table=filter:40 family=2 entries=1 op=nft_register_chain pid=2395 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 05:43:40.937376 kernel: audit: type=1300 audit(1761889420.891:247): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc0d741490 a2=0 a3=7ffc0d74147c items=0 ppid=2302 pid=2395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:40.891000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 31 05:43:40.942558 kernel: audit: type=1327 audit(1761889420.891:247): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 31 05:43:40.891000 audit[2396]: NETFILTER_CFG table=mangle:41 family=10 entries=1 op=nft_register_chain pid=2396 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 05:43:40.891000 audit[2396]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=104 a0=3 a1=7fff576a6e40 a2=0 a3=7fff576a6e2c items=0 ppid=2302 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:40.891000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 31 05:43:40.891000 audit[2397]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2397 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 05:43:40.891000 audit[2397]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda1dd3dd0 a2=0 a3=7ffda1dd3dbc items=0 ppid=2302 pid=2397 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:40.891000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 31 05:43:40.949559 kernel: audit: type=1325 audit(1761889420.891:248): table=mangle:41 family=10 entries=1 op=nft_register_chain pid=2396 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 05:43:40.900000 audit[2398]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2398 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 05:43:40.900000 audit[2398]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffca5c75800 a2=0 a3=7ffca5c757ec items=0 ppid=2302 pid=2398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:40.900000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 31 
05:43:40.986000 audit[2399]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2399 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 05:43:40.986000 audit[2399]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffffcaf5ff0 a2=0 a3=7ffffcaf5fdc items=0 ppid=2302 pid=2399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:40.986000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 31 05:43:40.993000 audit[2401]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2401 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 05:43:40.993000 audit[2401]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc9537e520 a2=0 a3=7ffc9537e50c items=0 ppid=2302 pid=2401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:40.993000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 31 05:43:41.001000 audit[2404]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2404 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 05:43:41.001000 audit[2404]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcf63bb220 a2=0 a3=7ffcf63bb20c items=0 ppid=2302 pid=2404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.001000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 31 05:43:41.005000 audit[2405]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2405 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 05:43:41.005000 audit[2405]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd7f7ea450 a2=0 a3=7ffd7f7ea43c items=0 ppid=2302 pid=2405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.005000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 31 05:43:41.011000 audit[2407]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2407 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 05:43:41.011000 audit[2407]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe4d68b6d0 a2=0 a3=7ffe4d68b6bc items=0 ppid=2302 pid=2407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.011000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 31 05:43:41.013000 audit[2408]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2408 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Oct 31 05:43:41.013000 audit[2408]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc7ab57cd0 a2=0 a3=7ffc7ab57cbc items=0 ppid=2302 pid=2408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.013000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 31 05:43:41.018000 audit[2410]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2410 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 05:43:41.018000 audit[2410]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc0e48de70 a2=0 a3=7ffc0e48de5c items=0 ppid=2302 pid=2410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.018000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 31 05:43:41.025000 audit[2413]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2413 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 05:43:41.025000 audit[2413]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd4bf5ff30 a2=0 a3=7ffd4bf5ff1c items=0 ppid=2302 pid=2413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.025000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 31 05:43:41.027000 audit[2414]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2414 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 05:43:41.027000 audit[2414]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe57029090 a2=0 a3=7ffe5702907c items=0 ppid=2302 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.027000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 31 05:43:41.031000 audit[2416]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2416 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 05:43:41.031000 audit[2416]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc3196f3e0 a2=0 a3=7ffc3196f3cc items=0 ppid=2302 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.031000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 31 05:43:41.033000 audit[2417]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2417 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 05:43:41.033000 audit[2417]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 
a1=7ffd8823e760 a2=0 a3=7ffd8823e74c items=0 ppid=2302 pid=2417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.033000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 31 05:43:41.037000 audit[2419]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2419 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 05:43:41.037000 audit[2419]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffea97d5690 a2=0 a3=7ffea97d567c items=0 ppid=2302 pid=2419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.037000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 31 05:43:41.044000 audit[2422]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2422 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 05:43:41.044000 audit[2422]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffce0ed0e90 a2=0 a3=7ffce0ed0e7c items=0 ppid=2302 pid=2422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.044000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 31 05:43:41.054000 audit[2425]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2425 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 05:43:41.054000 audit[2425]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffeb56816b0 a2=0 a3=7ffeb568169c items=0 ppid=2302 pid=2425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.054000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 31 05:43:41.058000 audit[2426]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2426 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 05:43:41.058000 audit[2426]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd74675df0 a2=0 a3=7ffd74675ddc items=0 ppid=2302 pid=2426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.058000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 31 05:43:41.062000 audit[2428]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2428 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 05:43:41.062000 audit[2428]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=524 a0=3 a1=7ffd226461e0 a2=0 a3=7ffd226461cc items=0 ppid=2302 pid=2428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.062000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 31 05:43:41.069000 audit[2431]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2431 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 05:43:41.069000 audit[2431]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd750dc770 a2=0 a3=7ffd750dc75c items=0 ppid=2302 pid=2431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.069000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 31 05:43:41.071000 audit[2432]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2432 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 05:43:41.071000 audit[2432]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb74d9ac0 a2=0 a3=7fffb74d9aac items=0 ppid=2302 pid=2432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.071000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 31 
05:43:41.075000 audit[2434]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2434 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 31 05:43:41.075000 audit[2434]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc7bf1f870 a2=0 a3=7ffc7bf1f85c items=0 ppid=2302 pid=2434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.075000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 31 05:43:41.118000 audit[2440]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2440 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:43:41.118000 audit[2440]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff4ee03840 a2=0 a3=7fff4ee0382c items=0 ppid=2302 pid=2440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.118000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:43:41.133000 audit[2440]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2440 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:43:41.133000 audit[2440]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7fff4ee03840 a2=0 a3=7fff4ee0382c items=0 ppid=2302 pid=2440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.133000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:43:41.136000 audit[2445]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2445 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 05:43:41.136000 audit[2445]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc8735f0e0 a2=0 a3=7ffc8735f0cc items=0 ppid=2302 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.136000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 31 05:43:41.143000 audit[2447]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2447 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 05:43:41.143000 audit[2447]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffdd271dd00 a2=0 a3=7ffdd271dcec items=0 ppid=2302 pid=2447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.143000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 31 05:43:41.149000 audit[2450]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2450 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 05:43:41.149000 audit[2450]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 
a1=7ffe043521e0 a2=0 a3=7ffe043521cc items=0 ppid=2302 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.149000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 31 05:43:41.152000 audit[2451]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2451 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 05:43:41.152000 audit[2451]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd149f22b0 a2=0 a3=7ffd149f229c items=0 ppid=2302 pid=2451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.152000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 31 05:43:41.159000 audit[2453]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2453 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 05:43:41.159000 audit[2453]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff18b734e0 a2=0 a3=7fff18b734cc items=0 ppid=2302 pid=2453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.159000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 31 05:43:41.161000 audit[2454]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2454 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 05:43:41.161000 audit[2454]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff93a0bda0 a2=0 a3=7fff93a0bd8c items=0 ppid=2302 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.161000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 31 05:43:41.165000 audit[2456]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2456 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 05:43:41.165000 audit[2456]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdc008d170 a2=0 a3=7ffdc008d15c items=0 ppid=2302 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.165000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 31 05:43:41.170000 audit[2459]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2459 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 05:43:41.170000 audit[2459]: SYSCALL arch=c000003e syscall=46 
success=yes exit=828 a0=3 a1=7ffdee4aabe0 a2=0 a3=7ffdee4aabcc items=0 ppid=2302 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.170000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 31 05:43:41.173000 audit[2460]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2460 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 05:43:41.173000 audit[2460]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe0c9a3900 a2=0 a3=7ffe0c9a38ec items=0 ppid=2302 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.173000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 31 05:43:41.176000 audit[2462]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2462 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 05:43:41.176000 audit[2462]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe30cb5b40 a2=0 a3=7ffe30cb5b2c items=0 ppid=2302 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.176000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 31 05:43:41.179000 audit[2463]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2463 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 05:43:41.179000 audit[2463]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe424f0e10 a2=0 a3=7ffe424f0dfc items=0 ppid=2302 pid=2463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.179000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 31 05:43:41.183000 audit[2465]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2465 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 05:43:41.183000 audit[2465]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe7e6fc6f0 a2=0 a3=7ffe7e6fc6dc items=0 ppid=2302 pid=2465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.183000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 31 05:43:41.189000 audit[2468]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2468 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 05:43:41.189000 audit[2468]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=748 a0=3 a1=7fffcd74bd00 a2=0 a3=7fffcd74bcec items=0 ppid=2302 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.189000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 31 05:43:41.195000 audit[2471]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2471 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 05:43:41.195000 audit[2471]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffef8f5dd90 a2=0 a3=7ffef8f5dd7c items=0 ppid=2302 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.195000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 31 05:43:41.197000 audit[2472]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2472 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 05:43:41.197000 audit[2472]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdf73367b0 a2=0 a3=7ffdf733679c items=0 ppid=2302 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.197000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 31 05:43:41.201000 audit[2474]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2474 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 05:43:41.201000 audit[2474]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffedf174660 a2=0 a3=7ffedf17464c items=0 ppid=2302 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.201000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 31 05:43:41.206000 audit[2477]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2477 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 05:43:41.206000 audit[2477]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffe13eefac0 a2=0 a3=7ffe13eefaac items=0 ppid=2302 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.206000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 31 05:43:41.208000 audit[2478]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2478 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 05:43:41.208000 audit[2478]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffca27fd9e0 a2=0 a3=7ffca27fd9cc items=0 ppid=2302 
pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.208000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 31 05:43:41.212000 audit[2480]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2480 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 05:43:41.212000 audit[2480]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd3ec213c0 a2=0 a3=7ffd3ec213ac items=0 ppid=2302 pid=2480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.212000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 31 05:43:41.214000 audit[2481]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2481 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 05:43:41.214000 audit[2481]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc6a7616f0 a2=0 a3=7ffc6a7616dc items=0 ppid=2302 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.214000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 31 05:43:41.219000 audit[2483]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2483 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Oct 31 05:43:41.219000 audit[2483]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdf550bf30 a2=0 a3=7ffdf550bf1c items=0 ppid=2302 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.219000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 31 05:43:41.228000 audit[2486]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2486 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 31 05:43:41.228000 audit[2486]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc45d71dd0 a2=0 a3=7ffc45d71dbc items=0 ppid=2302 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.228000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 31 05:43:41.241000 audit[2488]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2488 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 31 05:43:41.241000 audit[2488]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fff319e9a80 a2=0 a3=7fff319e9a6c items=0 ppid=2302 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.241000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:43:41.243000 audit[2488]: NETFILTER_CFG table=nat:88 
family=10 entries=7 op=nft_register_chain pid=2488 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 31 05:43:41.243000 audit[2488]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fff319e9a80 a2=0 a3=7fff319e9a6c items=0 ppid=2302 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:41.243000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:43:42.693812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2354141594.mount: Deactivated successfully. Oct 31 05:43:43.832349 kubelet[2197]: I1031 05:43:43.832238 2197 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z4sj2" podStartSLOduration=4.832217579 podStartE2EDuration="4.832217579s" podCreationTimestamp="2025-10-31 05:43:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 05:43:41.30789018 +0000 UTC m=+6.421807411" watchObservedRunningTime="2025-10-31 05:43:43.832217579 +0000 UTC m=+8.946134810" Oct 31 05:43:44.270094 env[1308]: time="2025-10-31T05:43:44.270028291Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 05:43:44.279940 env[1308]: time="2025-10-31T05:43:44.279753048Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 05:43:44.282708 env[1308]: time="2025-10-31T05:43:44.282669903Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:quay.io/tigera/operator:v1.38.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 05:43:44.284760 env[1308]: time="2025-10-31T05:43:44.284724468Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 05:43:44.285561 env[1308]: time="2025-10-31T05:43:44.285485535Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 31 05:43:44.294058 env[1308]: time="2025-10-31T05:43:44.294014549Z" level=info msg="CreateContainer within sandbox \"1d3fbbf18a6d0cbb96e27fcb41c6e6f3e9f329d053f2f7478378113ec3672fd3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 31 05:43:44.310822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2485652855.mount: Deactivated successfully. Oct 31 05:43:44.331664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3227793537.mount: Deactivated successfully. 
Oct 31 05:43:44.336801 env[1308]: time="2025-10-31T05:43:44.336737223Z" level=info msg="CreateContainer within sandbox \"1d3fbbf18a6d0cbb96e27fcb41c6e6f3e9f329d053f2f7478378113ec3672fd3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"eb464173ab23b20a09f007248092643c9c3062b74b785ef2023a5d4c2caebd97\"" Oct 31 05:43:44.337764 env[1308]: time="2025-10-31T05:43:44.337725731Z" level=info msg="StartContainer for \"eb464173ab23b20a09f007248092643c9c3062b74b785ef2023a5d4c2caebd97\"" Oct 31 05:43:44.427254 env[1308]: time="2025-10-31T05:43:44.427187411Z" level=info msg="StartContainer for \"eb464173ab23b20a09f007248092643c9c3062b74b785ef2023a5d4c2caebd97\" returns successfully" Oct 31 05:43:47.752320 kubelet[2197]: I1031 05:43:47.752229 2197 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-2j9lj" podStartSLOduration=4.918212501 podStartE2EDuration="8.752208424s" podCreationTimestamp="2025-10-31 05:43:39 +0000 UTC" firstStartedPulling="2025-10-31 05:43:40.456336289 +0000 UTC m=+5.570253510" lastFinishedPulling="2025-10-31 05:43:44.290332206 +0000 UTC m=+9.404249433" observedRunningTime="2025-10-31 05:43:45.337264388 +0000 UTC m=+10.451181621" watchObservedRunningTime="2025-10-31 05:43:47.752208424 +0000 UTC m=+12.866125657" Oct 31 05:43:51.851105 sudo[1533]: pam_unix(sudo:session): session closed for user root Oct 31 05:43:51.865465 kernel: kauditd_printk_skb: 143 callbacks suppressed Oct 31 05:43:51.865724 kernel: audit: type=1106 audit(1761889431.850:296): pid=1533 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 31 05:43:51.850000 audit[1533]: USER_END pid=1533 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 31 05:43:51.850000 audit[1533]: CRED_DISP pid=1533 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 31 05:43:51.879748 kernel: audit: type=1104 audit(1761889431.850:297): pid=1533 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 31 05:43:52.025242 sshd[1529]: pam_unix(sshd:session): session closed for user core Oct 31 05:43:52.033000 audit[1529]: USER_END pid=1529 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:43:52.043565 kernel: audit: type=1106 audit(1761889432.033:298): pid=1529 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:43:52.045972 systemd[1]: sshd@8-10.244.21.74:22-139.178.68.195:33036.service: Deactivated successfully. Oct 31 05:43:52.047629 systemd-logind[1296]: Session 9 logged out. Waiting for processes to exit. Oct 31 05:43:52.049006 systemd[1]: session-9.scope: Deactivated successfully. Oct 31 05:43:52.050041 systemd-logind[1296]: Removed session 9. 
Oct 31 05:43:52.033000 audit[1529]: CRED_DISP pid=1529 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:43:52.058566 kernel: audit: type=1104 audit(1761889432.033:299): pid=1529 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:43:52.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.244.21.74:22-139.178.68.195:33036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:43:52.072564 kernel: audit: type=1131 audit(1761889432.046:300): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.244.21.74:22-139.178.68.195:33036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 05:43:53.008000 audit[2574]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2574 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:43:53.016743 kernel: audit: type=1325 audit(1761889433.008:301): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2574 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:43:53.008000 audit[2574]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe6cacaac0 a2=0 a3=7ffe6cacaaac items=0 ppid=2302 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:53.030804 kernel: audit: type=1300 audit(1761889433.008:301): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe6cacaac0 a2=0 a3=7ffe6cacaaac items=0 ppid=2302 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:53.008000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:43:53.035947 kernel: audit: type=1327 audit(1761889433.008:301): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:43:53.037000 audit[2574]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2574 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:43:53.042576 kernel: audit: type=1325 audit(1761889433.037:302): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2574 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:43:53.037000 audit[2574]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe6cacaac0 a2=0 a3=0 items=0 ppid=2302 pid=2574 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:53.066583 kernel: audit: type=1300 audit(1761889433.037:302): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe6cacaac0 a2=0 a3=0 items=0 ppid=2302 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:53.037000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:43:53.104000 audit[2576]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2576 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:43:53.104000 audit[2576]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff12e81bf0 a2=0 a3=7fff12e81bdc items=0 ppid=2302 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:53.104000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:43:53.112000 audit[2576]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2576 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:43:53.112000 audit[2576]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff12e81bf0 a2=0 a3=0 items=0 ppid=2302 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:53.112000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:43:56.947000 audit[2578]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2578 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:43:56.956559 kernel: kauditd_printk_skb: 7 callbacks suppressed Oct 31 05:43:56.956740 kernel: audit: type=1325 audit(1761889436.947:305): table=filter:93 family=2 entries=17 op=nft_register_rule pid=2578 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:43:56.947000 audit[2578]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc2a5aaf40 a2=0 a3=7ffc2a5aaf2c items=0 ppid=2302 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:56.971705 kernel: audit: type=1300 audit(1761889436.947:305): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc2a5aaf40 a2=0 a3=7ffc2a5aaf2c items=0 ppid=2302 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:56.947000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:43:56.986563 kernel: audit: type=1327 audit(1761889436.947:305): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:43:56.970000 audit[2578]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2578 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:43:56.996568 kernel: audit: type=1325 audit(1761889436.970:306): table=nat:94 family=2 entries=12 op=nft_register_rule pid=2578 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:43:56.970000 audit[2578]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc2a5aaf40 a2=0 a3=0 items=0 ppid=2302 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:57.007574 kernel: audit: type=1300 audit(1761889436.970:306): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc2a5aaf40 a2=0 a3=0 items=0 ppid=2302 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:56.970000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:43:57.015587 kernel: audit: type=1327 audit(1761889436.970:306): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:43:57.055000 audit[2580]: NETFILTER_CFG table=filter:95 family=2 entries=18 op=nft_register_rule pid=2580 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:43:57.060650 kernel: audit: type=1325 audit(1761889437.055:307): table=filter:95 family=2 entries=18 op=nft_register_rule pid=2580 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:43:57.055000 audit[2580]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffedc001e90 a2=0 a3=7ffedc001e7c items=0 ppid=2302 pid=2580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:57.070692 kernel: audit: type=1300 audit(1761889437.055:307): arch=c000003e syscall=46 success=yes exit=6736 a0=3 
a1=7ffedc001e90 a2=0 a3=7ffedc001e7c items=0 ppid=2302 pid=2580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:57.070793 kernel: audit: type=1327 audit(1761889437.055:307): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:43:57.055000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:43:57.080000 audit[2580]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2580 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:43:57.080000 audit[2580]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffedc001e90 a2=0 a3=0 items=0 ppid=2302 pid=2580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:57.080000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:43:57.086569 kernel: audit: type=1325 audit(1761889437.080:308): table=nat:96 family=2 entries=12 op=nft_register_rule pid=2580 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:43:58.161000 audit[2582]: NETFILTER_CFG table=filter:97 family=2 entries=19 op=nft_register_rule pid=2582 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:43:58.161000 audit[2582]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffce1519860 a2=0 a3=7ffce151984c items=0 ppid=2302 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:58.161000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:43:58.171000 audit[2582]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2582 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:43:58.171000 audit[2582]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffce1519860 a2=0 a3=0 items=0 ppid=2302 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:58.171000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:43:58.969000 audit[2584]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=2584 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:43:58.969000 audit[2584]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd6c9b1cc0 a2=0 a3=7ffd6c9b1cac items=0 ppid=2302 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:43:58.969000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:43:58.977000 audit[2584]: NETFILTER_CFG table=nat:100 family=2 entries=12 op=nft_register_rule pid=2584 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:43:58.977000 audit[2584]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd6c9b1cc0 a2=0 a3=0 items=0 ppid=2302 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:43:58.977000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Oct 31 05:43:58.989718 kubelet[2197]: I1031 05:43:58.989642 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dznb\" (UniqueName: \"kubernetes.io/projected/49455a44-c3fc-4eb8-9296-5491a18c02bb-kube-api-access-5dznb\") pod \"calico-typha-6cb5c85948-hf9v7\" (UID: \"49455a44-c3fc-4eb8-9296-5491a18c02bb\") " pod="calico-system/calico-typha-6cb5c85948-hf9v7"
Oct 31 05:43:58.990445 kubelet[2197]: I1031 05:43:58.989740 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49455a44-c3fc-4eb8-9296-5491a18c02bb-tigera-ca-bundle\") pod \"calico-typha-6cb5c85948-hf9v7\" (UID: \"49455a44-c3fc-4eb8-9296-5491a18c02bb\") " pod="calico-system/calico-typha-6cb5c85948-hf9v7"
Oct 31 05:43:58.990445 kubelet[2197]: I1031 05:43:58.989774 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/49455a44-c3fc-4eb8-9296-5491a18c02bb-typha-certs\") pod \"calico-typha-6cb5c85948-hf9v7\" (UID: \"49455a44-c3fc-4eb8-9296-5491a18c02bb\") " pod="calico-system/calico-typha-6cb5c85948-hf9v7"
Oct 31 05:43:59.090305 kubelet[2197]: I1031 05:43:59.090241 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e384dce0-3518-42ee-a1c2-5b13c086db95-cni-net-dir\") pod \"calico-node-hq9tg\" (UID: \"e384dce0-3518-42ee-a1c2-5b13c086db95\") " pod="calico-system/calico-node-hq9tg"
Oct 31 05:43:59.090305 kubelet[2197]: I1031 05:43:59.090309 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2v62\" (UniqueName: \"kubernetes.io/projected/e384dce0-3518-42ee-a1c2-5b13c086db95-kube-api-access-p2v62\") pod \"calico-node-hq9tg\" (UID: \"e384dce0-3518-42ee-a1c2-5b13c086db95\") " pod="calico-system/calico-node-hq9tg"
Oct 31 05:43:59.090658 kubelet[2197]: I1031 05:43:59.090346 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e384dce0-3518-42ee-a1c2-5b13c086db95-cni-log-dir\") pod \"calico-node-hq9tg\" (UID: \"e384dce0-3518-42ee-a1c2-5b13c086db95\") " pod="calico-system/calico-node-hq9tg"
Oct 31 05:43:59.090658 kubelet[2197]: I1031 05:43:59.090378 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e384dce0-3518-42ee-a1c2-5b13c086db95-flexvol-driver-host\") pod \"calico-node-hq9tg\" (UID: \"e384dce0-3518-42ee-a1c2-5b13c086db95\") " pod="calico-system/calico-node-hq9tg"
Oct 31 05:43:59.090658 kubelet[2197]: I1031 05:43:59.090406 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e384dce0-3518-42ee-a1c2-5b13c086db95-cni-bin-dir\") pod \"calico-node-hq9tg\" (UID: \"e384dce0-3518-42ee-a1c2-5b13c086db95\") " pod="calico-system/calico-node-hq9tg"
Oct 31 05:43:59.090658 kubelet[2197]: I1031 05:43:59.090433 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e384dce0-3518-42ee-a1c2-5b13c086db95-var-run-calico\") pod \"calico-node-hq9tg\" (UID: \"e384dce0-3518-42ee-a1c2-5b13c086db95\") " pod="calico-system/calico-node-hq9tg"
Oct 31 05:43:59.090658 kubelet[2197]: I1031 05:43:59.090470 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e384dce0-3518-42ee-a1c2-5b13c086db95-tigera-ca-bundle\") pod \"calico-node-hq9tg\" (UID: \"e384dce0-3518-42ee-a1c2-5b13c086db95\") " pod="calico-system/calico-node-hq9tg"
Oct 31 05:43:59.090972 kubelet[2197]: I1031 05:43:59.090603 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e384dce0-3518-42ee-a1c2-5b13c086db95-var-lib-calico\") pod \"calico-node-hq9tg\" (UID: \"e384dce0-3518-42ee-a1c2-5b13c086db95\") " pod="calico-system/calico-node-hq9tg"
Oct 31 05:43:59.090972 kubelet[2197]: I1031 05:43:59.090664 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e384dce0-3518-42ee-a1c2-5b13c086db95-node-certs\") pod \"calico-node-hq9tg\" (UID: \"e384dce0-3518-42ee-a1c2-5b13c086db95\") " pod="calico-system/calico-node-hq9tg"
Oct 31 05:43:59.090972 kubelet[2197]: I1031 05:43:59.090696 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e384dce0-3518-42ee-a1c2-5b13c086db95-lib-modules\") pod \"calico-node-hq9tg\" (UID: \"e384dce0-3518-42ee-a1c2-5b13c086db95\") " pod="calico-system/calico-node-hq9tg"
Oct 31 05:43:59.090972 kubelet[2197]: I1031 05:43:59.090722 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e384dce0-3518-42ee-a1c2-5b13c086db95-xtables-lock\") pod \"calico-node-hq9tg\" (UID: \"e384dce0-3518-42ee-a1c2-5b13c086db95\") " pod="calico-system/calico-node-hq9tg"
Oct 31 05:43:59.090972 kubelet[2197]: I1031 05:43:59.090768 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e384dce0-3518-42ee-a1c2-5b13c086db95-policysync\") pod \"calico-node-hq9tg\" (UID: \"e384dce0-3518-42ee-a1c2-5b13c086db95\") " pod="calico-system/calico-node-hq9tg"
Oct 31 05:43:59.171035 kubelet[2197]: E1031 05:43:59.170971 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6jdvb" podUID="749c5f31-df45-44a4-9a60-d28a8f071a0b"
Oct 31 05:43:59.191505 kubelet[2197]: I1031 05:43:59.191447 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/749c5f31-df45-44a4-9a60-d28a8f071a0b-registration-dir\") pod \"csi-node-driver-6jdvb\" (UID: \"749c5f31-df45-44a4-9a60-d28a8f071a0b\") " pod="calico-system/csi-node-driver-6jdvb"
Oct 31 05:43:59.191840 kubelet[2197]: I1031 05:43:59.191574 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/749c5f31-df45-44a4-9a60-d28a8f071a0b-varrun\") pod \"csi-node-driver-6jdvb\" (UID: \"749c5f31-df45-44a4-9a60-d28a8f071a0b\") " pod="calico-system/csi-node-driver-6jdvb"
Oct 31 05:43:59.191840 kubelet[2197]: I1031 05:43:59.191646 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9rn6\" (UniqueName: \"kubernetes.io/projected/749c5f31-df45-44a4-9a60-d28a8f071a0b-kube-api-access-q9rn6\") pod \"csi-node-driver-6jdvb\" (UID: \"749c5f31-df45-44a4-9a60-d28a8f071a0b\") " pod="calico-system/csi-node-driver-6jdvb"
Oct 31 05:43:59.191840 kubelet[2197]: I1031 05:43:59.191782 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/749c5f31-df45-44a4-9a60-d28a8f071a0b-socket-dir\") pod \"csi-node-driver-6jdvb\" (UID: \"749c5f31-df45-44a4-9a60-d28a8f071a0b\") " pod="calico-system/csi-node-driver-6jdvb"
Oct 31 05:43:59.192084 kubelet[2197]: I1031 05:43:59.191863 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/749c5f31-df45-44a4-9a60-d28a8f071a0b-kubelet-dir\") pod \"csi-node-driver-6jdvb\" (UID: \"749c5f31-df45-44a4-9a60-d28a8f071a0b\") " pod="calico-system/csi-node-driver-6jdvb"
Oct 31 05:43:59.195943 kubelet[2197]: E1031 05:43:59.195902 2197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 05:43:59.196097 kubelet[2197]: W1031 05:43:59.195945 2197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 05:43:59.196969 kubelet[2197]: E1031 05:43:59.196934 2197 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 05:43:59.206493 env[1308]: time="2025-10-31T05:43:59.205642996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cb5c85948-hf9v7,Uid:49455a44-c3fc-4eb8-9296-5491a18c02bb,Namespace:calico-system,Attempt:0,}"
Oct 31 05:43:59.218300 kubelet[2197]: E1031 05:43:59.218268 2197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 05:43:59.218526 kubelet[2197]: W1031 05:43:59.218496 2197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 05:43:59.218724 kubelet[2197]: E1031 05:43:59.218695 2197 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 05:43:59.281928 env[1308]: time="2025-10-31T05:43:59.276442243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 05:43:59.281928 env[1308]: time="2025-10-31T05:43:59.276998430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 05:43:59.281928 env[1308]: time="2025-10-31T05:43:59.277084682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 05:43:59.281928 env[1308]: time="2025-10-31T05:43:59.277693623Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/84d7076c346b0c46a3702d851445204b38b702e73f12fe948b72fdb736a0c2a2 pid=2604 runtime=io.containerd.runc.v2
Oct 31 05:43:59.310972 kubelet[2197]: E1031 05:43:59.310925 2197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 05:43:59.311216 kubelet[2197]: W1031 05:43:59.311182 2197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 05:43:59.311397 kubelet[2197]: E1031 05:43:59.311367 2197 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 05:43:59.322687 kubelet[2197]: E1031 05:43:59.322652 2197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 05:43:59.322913 kubelet[2197]: W1031 05:43:59.322881 2197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 05:43:59.323051 kubelet[2197]: E1031 05:43:59.323023 2197 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 05:43:59.325797 kubelet[2197]: E1031 05:43:59.325770 2197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 05:43:59.325797 kubelet[2197]: W1031 05:43:59.325795 2197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 05:43:59.325989 kubelet[2197]: E1031 05:43:59.325817 2197 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 05:43:59.326980 kubelet[2197]: E1031 05:43:59.326937 2197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 05:43:59.326980 kubelet[2197]: W1031 05:43:59.326970 2197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 05:43:59.327227 kubelet[2197]: E1031 05:43:59.327193 2197 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 05:43:59.327343 kubelet[2197]: E1031 05:43:59.327317 2197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 05:43:59.327343 kubelet[2197]: W1031 05:43:59.327338 2197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 05:43:59.327578 kubelet[2197]: E1031 05:43:59.327550 2197 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 05:43:59.329771 kubelet[2197]: E1031 05:43:59.329746 2197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 05:43:59.329771 kubelet[2197]: W1031 05:43:59.329766 2197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 05:43:59.329966 kubelet[2197]: E1031 05:43:59.329940 2197 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 05:43:59.330103 kubelet[2197]: E1031 05:43:59.330032 2197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 05:43:59.330269 kubelet[2197]: W1031 05:43:59.330242 2197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 05:43:59.342325 kubelet[2197]: E1031 05:43:59.330449 2197 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 05:43:59.342325 kubelet[2197]: E1031 05:43:59.330736 2197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 05:43:59.342325 kubelet[2197]: W1031 05:43:59.330752 2197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 05:43:59.342325 kubelet[2197]: E1031 05:43:59.330799 2197 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 05:43:59.342325 kubelet[2197]: E1031 05:43:59.331009 2197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 05:43:59.342325 kubelet[2197]: W1031 05:43:59.331022 2197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 05:43:59.342325 kubelet[2197]: E1031 05:43:59.331063 2197 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 05:43:59.342325 kubelet[2197]: E1031 05:43:59.331282 2197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 05:43:59.342325 kubelet[2197]: W1031 05:43:59.331296 2197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 05:43:59.342325 kubelet[2197]: E1031 05:43:59.331404 2197 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 05:43:59.342990 kubelet[2197]: E1031 05:43:59.331581 2197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 05:43:59.342990 kubelet[2197]: W1031 05:43:59.331609 2197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 05:43:59.342990 kubelet[2197]: E1031 05:43:59.331721 2197 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 05:43:59.342990 kubelet[2197]: E1031 05:43:59.331877 2197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 05:43:59.342990 kubelet[2197]: W1031 05:43:59.331891 2197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 05:43:59.342990 kubelet[2197]: E1031 05:43:59.331994 2197 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 05:43:59.342990 kubelet[2197]: E1031 05:43:59.332152 2197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 05:43:59.342990 kubelet[2197]: W1031 05:43:59.332166 2197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 05:43:59.342990 kubelet[2197]: E1031 05:43:59.332269 2197 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 05:43:59.342990 kubelet[2197]: E1031 05:43:59.332425 2197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 05:43:59.343518 kubelet[2197]: W1031 05:43:59.332438 2197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 05:43:59.343518 kubelet[2197]: E1031 05:43:59.332557 2197 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 05:43:59.343518 kubelet[2197]: E1031 05:43:59.332727 2197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 05:43:59.343518 kubelet[2197]: W1031 05:43:59.332742 2197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 05:43:59.343518 kubelet[2197]: E1031 05:43:59.332848 2197 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 05:43:59.343518 kubelet[2197]: E1031 05:43:59.333005 2197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 05:43:59.343518 kubelet[2197]: W1031 05:43:59.333018 2197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 05:43:59.343518 kubelet[2197]: E1031 05:43:59.333124 2197 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 05:43:59.343518 kubelet[2197]: E1031 05:43:59.333305 2197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 05:43:59.343518 kubelet[2197]: W1031 05:43:59.333319 2197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 05:43:59.344135 kubelet[2197]: E1031 05:43:59.333421 2197 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 05:43:59.344135 kubelet[2197]: E1031 05:43:59.333634 2197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 05:43:59.344135 kubelet[2197]: W1031 05:43:59.333648 2197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 05:43:59.344135 kubelet[2197]: E1031 05:43:59.333763 2197 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 05:43:59.344135 kubelet[2197]: E1031 05:43:59.341938 2197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 05:43:59.344135 kubelet[2197]: W1031 05:43:59.341965 2197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 05:43:59.344135 kubelet[2197]: E1031 05:43:59.342064 2197 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 05:43:59.344815 kubelet[2197]: E1031 05:43:59.344792 2197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 05:43:59.344963 kubelet[2197]: W1031 05:43:59.344936 2197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 05:43:59.345517 kubelet[2197]: E1031 05:43:59.345491 2197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 05:43:59.345714 kubelet[2197]: W1031 05:43:59.345687 2197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 05:43:59.346232 kubelet[2197]: E1031 05:43:59.345643 2197 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 05:43:59.346372 kubelet[2197]: E1031 05:43:59.346345 2197 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 05:43:59.357605 kubelet[2197]: E1031 05:43:59.357522 2197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 05:43:59.358211 env[1308]: time="2025-10-31T05:43:59.358089007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hq9tg,Uid:e384dce0-3518-42ee-a1c2-5b13c086db95,Namespace:calico-system,Attempt:0,}"
Oct 31 05:43:59.358421 kubelet[2197]: W1031 05:43:59.358391 2197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 05:43:59.358892 kubelet[2197]: E1031 05:43:59.358870 2197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 05:43:59.360073 kubelet[2197]: W1031 05:43:59.360044 2197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 05:43:59.360529 kubelet[2197]: E1031 05:43:59.360506 2197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 05:43:59.360775 kubelet[2197]: W1031 05:43:59.360739 2197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 05:43:59.361180 kubelet[2197]: E1031 05:43:59.361158 2197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 05:43:59.361348 kubelet[2197]: W1031 05:43:59.361320 2197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 05:43:59.361533 kubelet[2197]: E1031 05:43:59.361494 2197 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 05:43:59.361749 kubelet[2197]: E1031 05:43:59.361721 2197 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 05:43:59.361912 kubelet[2197]: E1031 05:43:59.361885 2197 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 05:43:59.362365 kubelet[2197]: E1031 05:43:59.362339 2197 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 05:43:59.364666 kubelet[2197]: E1031 05:43:59.364641 2197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 05:43:59.364836 kubelet[2197]: W1031 05:43:59.364809 2197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 05:43:59.365047 kubelet[2197]: E1031 05:43:59.365020 2197 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 05:43:59.400721 kubelet[2197]: E1031 05:43:59.400665 2197 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 31 05:43:59.400721 kubelet[2197]: W1031 05:43:59.400713 2197 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 31 05:43:59.400994 kubelet[2197]: E1031 05:43:59.400742 2197 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 05:43:59.427975 env[1308]: time="2025-10-31T05:43:59.427866092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 05:43:59.428236 env[1308]: time="2025-10-31T05:43:59.427945708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 05:43:59.428236 env[1308]: time="2025-10-31T05:43:59.427963311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 05:43:59.428452 env[1308]: time="2025-10-31T05:43:59.428413144Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b7d7075b94f42aa3cd1353035b4285bab0574232f877cfce584d3eefc463f383 pid=2659 runtime=io.containerd.runc.v2
Oct 31 05:43:59.544873 env[1308]: time="2025-10-31T05:43:59.541162035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hq9tg,Uid:e384dce0-3518-42ee-a1c2-5b13c086db95,Namespace:calico-system,Attempt:0,} returns sandbox id \"b7d7075b94f42aa3cd1353035b4285bab0574232f877cfce584d3eefc463f383\""
Oct 31 05:43:59.550240 env[1308]: time="2025-10-31T05:43:59.548169444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Oct 31 05:43:59.555257 env[1308]: time="2025-10-31T05:43:59.555143530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cb5c85948-hf9v7,Uid:49455a44-c3fc-4eb8-9296-5491a18c02bb,Namespace:calico-system,Attempt:0,} returns sandbox id \"84d7076c346b0c46a3702d851445204b38b702e73f12fe948b72fdb736a0c2a2\""
Oct 31 05:43:59.997000 audit[2712]: NETFILTER_CFG table=filter:101 family=2 entries=22 op=nft_register_rule pid=2712 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Oct 31 05:43:59.997000 audit[2712]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffcfcade910 a2=0 a3=7ffcfcade8fc items=0 ppid=2302 pid=2712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:43:59.997000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Oct 31 05:44:00.002000 audit[2712]: NETFILTER_CFG table=nat:102 family=2 entries=12 op=nft_register_rule pid=2712 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Oct 31 05:44:00.002000 audit[2712]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcfcade910 a2=0 a3=0 items=0 ppid=2302 pid=2712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:44:00.002000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Oct 31 05:44:01.103127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1755466474.mount: Deactivated successfully.
Oct 31 05:44:01.225711 kubelet[2197]: E1031 05:44:01.225593 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6jdvb" podUID="749c5f31-df45-44a4-9a60-d28a8f071a0b"
Oct 31 05:44:01.290152 env[1308]: time="2025-10-31T05:44:01.290097378Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 05:44:01.294160 env[1308]: time="2025-10-31T05:44:01.294116755Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 05:44:01.299404 env[1308]: time="2025-10-31T05:44:01.299360374Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 05:44:01.300229 env[1308]: time="2025-10-31T05:44:01.300187253Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 05:44:01.301941 env[1308]: time="2025-10-31T05:44:01.301889681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Oct 31 05:44:01.306887 env[1308]: time="2025-10-31T05:44:01.306849967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Oct 31 05:44:01.308853 env[1308]: time="2025-10-31T05:44:01.308780285Z" level=info msg="CreateContainer within sandbox \"b7d7075b94f42aa3cd1353035b4285bab0574232f877cfce584d3eefc463f383\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Oct 31 05:44:01.327728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4196193305.mount: Deactivated successfully.
Oct 31 05:44:01.334024 env[1308]: time="2025-10-31T05:44:01.333957676Z" level=info msg="CreateContainer within sandbox \"b7d7075b94f42aa3cd1353035b4285bab0574232f877cfce584d3eefc463f383\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3c60e39f89f6c55086711e695729860dd8e9d472cfde1e7e576121c672a027dc\""
Oct 31 05:44:01.335800 env[1308]: time="2025-10-31T05:44:01.335715859Z" level=info msg="StartContainer for \"3c60e39f89f6c55086711e695729860dd8e9d472cfde1e7e576121c672a027dc\""
Oct 31 05:44:01.431639 env[1308]: time="2025-10-31T05:44:01.430884439Z" level=info msg="StartContainer for \"3c60e39f89f6c55086711e695729860dd8e9d472cfde1e7e576121c672a027dc\" returns successfully"
Oct 31 05:44:01.524671 env[1308]: time="2025-10-31T05:44:01.524606432Z" level=info msg="shim disconnected" id=3c60e39f89f6c55086711e695729860dd8e9d472cfde1e7e576121c672a027dc
Oct 31 05:44:01.525046 env[1308]: time="2025-10-31T05:44:01.525012159Z" level=warning msg="cleaning up after shim disconnected" id=3c60e39f89f6c55086711e695729860dd8e9d472cfde1e7e576121c672a027dc namespace=k8s.io
Oct 31 05:44:01.525202 env[1308]: time="2025-10-31T05:44:01.525173527Z" level=info msg="cleaning up dead shim"
Oct 31 05:44:01.536214 env[1308]: time="2025-10-31T05:44:01.536160855Z" level=warning msg="cleanup warnings time=\"2025-10-31T05:44:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2758 runtime=io.containerd.runc.v2\n"
Oct 31 05:44:02.103192 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c60e39f89f6c55086711e695729860dd8e9d472cfde1e7e576121c672a027dc-rootfs.mount: Deactivated successfully.
Oct 31 05:44:03.223903 kubelet[2197]: E1031 05:44:03.223835 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6jdvb" podUID="749c5f31-df45-44a4-9a60-d28a8f071a0b"
Oct 31 05:44:04.656088 env[1308]: time="2025-10-31T05:44:04.656018080Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 05:44:04.657799 env[1308]: time="2025-10-31T05:44:04.657758383Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 05:44:04.659612 env[1308]: time="2025-10-31T05:44:04.659575484Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 05:44:04.660414 env[1308]: time="2025-10-31T05:44:04.660378864Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 31 05:44:04.661260 env[1308]: time="2025-10-31T05:44:04.661220728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Oct 31 05:44:04.667547 env[1308]: time="2025-10-31T05:44:04.667368442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Oct 31 05:44:04.698246 env[1308]: time="2025-10-31T05:44:04.697975972Z" level=info msg="CreateContainer within sandbox \"84d7076c346b0c46a3702d851445204b38b702e73f12fe948b72fdb736a0c2a2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Oct 31 05:44:04.722908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2105401055.mount: Deactivated successfully.
Oct 31 05:44:04.731020 env[1308]: time="2025-10-31T05:44:04.730936154Z" level=info msg="CreateContainer within sandbox \"84d7076c346b0c46a3702d851445204b38b702e73f12fe948b72fdb736a0c2a2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"021bcfbb372b6abc3ad65190851b142e3ee5bc1d22a2bd0c49709c02a4546275\""
Oct 31 05:44:04.733922 env[1308]: time="2025-10-31T05:44:04.733862384Z" level=info msg="StartContainer for \"021bcfbb372b6abc3ad65190851b142e3ee5bc1d22a2bd0c49709c02a4546275\""
Oct 31 05:44:04.858106 env[1308]: time="2025-10-31T05:44:04.858045346Z" level=info msg="StartContainer for \"021bcfbb372b6abc3ad65190851b142e3ee5bc1d22a2bd0c49709c02a4546275\" returns successfully"
Oct 31 05:44:05.225666 kubelet[2197]: E1031 05:44:05.225601 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
pod="calico-system/csi-node-driver-6jdvb" podUID="749c5f31-df45-44a4-9a60-d28a8f071a0b" Oct 31 05:44:06.422299 kubelet[2197]: I1031 05:44:06.422212 2197 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 05:44:07.230726 kubelet[2197]: E1031 05:44:07.230660 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6jdvb" podUID="749c5f31-df45-44a4-9a60-d28a8f071a0b" Oct 31 05:44:09.225345 kubelet[2197]: E1031 05:44:09.224514 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6jdvb" podUID="749c5f31-df45-44a4-9a60-d28a8f071a0b" Oct 31 05:44:10.764645 env[1308]: time="2025-10-31T05:44:10.764554826Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 05:44:10.768137 env[1308]: time="2025-10-31T05:44:10.768101129Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 05:44:10.771065 env[1308]: time="2025-10-31T05:44:10.771029788Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 05:44:10.773710 env[1308]: time="2025-10-31T05:44:10.773667757Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 05:44:10.774480 env[1308]: time="2025-10-31T05:44:10.774421251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 31 05:44:10.780249 env[1308]: time="2025-10-31T05:44:10.780204859Z" level=info msg="CreateContainer within sandbox \"b7d7075b94f42aa3cd1353035b4285bab0574232f877cfce584d3eefc463f383\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 31 05:44:10.802862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount485546811.mount: Deactivated successfully. Oct 31 05:44:10.820224 env[1308]: time="2025-10-31T05:44:10.820101468Z" level=info msg="CreateContainer within sandbox \"b7d7075b94f42aa3cd1353035b4285bab0574232f877cfce584d3eefc463f383\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8f0a856081e1484cb32557cce0d1cc31eb98b5001adb3e61ec2a04bd790a8984\"" Oct 31 05:44:10.823224 env[1308]: time="2025-10-31T05:44:10.821675172Z" level=info msg="StartContainer for \"8f0a856081e1484cb32557cce0d1cc31eb98b5001adb3e61ec2a04bd790a8984\"" Oct 31 05:44:11.063286 env[1308]: time="2025-10-31T05:44:11.063098055Z" level=info msg="StartContainer for \"8f0a856081e1484cb32557cce0d1cc31eb98b5001adb3e61ec2a04bd790a8984\" returns successfully" Oct 31 05:44:11.224241 kubelet[2197]: E1031 05:44:11.224170 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6jdvb" podUID="749c5f31-df45-44a4-9a60-d28a8f071a0b" Oct 31 05:44:11.464448 kubelet[2197]: I1031 05:44:11.464217 2197 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6cb5c85948-hf9v7" podStartSLOduration=8.358587106 podStartE2EDuration="13.464163851s" podCreationTimestamp="2025-10-31 05:43:58 +0000 UTC" firstStartedPulling="2025-10-31 05:43:59.557228025 +0000 UTC m=+24.671145247" lastFinishedPulling="2025-10-31 05:44:04.662804764 +0000 UTC m=+29.776721992" observedRunningTime="2025-10-31 05:44:05.433659824 +0000 UTC m=+30.547577058" watchObservedRunningTime="2025-10-31 05:44:11.464163851 +0000 UTC m=+36.578081092" Oct 31 05:44:12.112936 env[1308]: time="2025-10-31T05:44:12.112782615Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 31 05:44:12.159800 env[1308]: time="2025-10-31T05:44:12.156462812Z" level=info msg="shim disconnected" id=8f0a856081e1484cb32557cce0d1cc31eb98b5001adb3e61ec2a04bd790a8984 Oct 31 05:44:12.159800 env[1308]: time="2025-10-31T05:44:12.156736009Z" level=warning msg="cleaning up after shim disconnected" id=8f0a856081e1484cb32557cce0d1cc31eb98b5001adb3e61ec2a04bd790a8984 namespace=k8s.io Oct 31 05:44:12.159800 env[1308]: time="2025-10-31T05:44:12.156760905Z" level=info msg="cleaning up dead shim" Oct 31 05:44:12.158732 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f0a856081e1484cb32557cce0d1cc31eb98b5001adb3e61ec2a04bd790a8984-rootfs.mount: Deactivated successfully. 
Oct 31 05:44:12.169685 env[1308]: time="2025-10-31T05:44:12.169561044Z" level=warning msg="cleanup warnings time=\"2025-10-31T05:44:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2871 runtime=io.containerd.runc.v2\n" Oct 31 05:44:12.212748 kubelet[2197]: I1031 05:44:12.209962 2197 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 31 05:44:12.344134 kubelet[2197]: I1031 05:44:12.344076 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f047019-b3ae-41f9-bdae-4d0664c67b92-tigera-ca-bundle\") pod \"calico-kube-controllers-58d798db8c-5nl8j\" (UID: \"6f047019-b3ae-41f9-bdae-4d0664c67b92\") " pod="calico-system/calico-kube-controllers-58d798db8c-5nl8j" Oct 31 05:44:12.344896 kubelet[2197]: I1031 05:44:12.344864 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv4x2\" (UniqueName: \"kubernetes.io/projected/d36a9e7d-9b1c-4050-ab86-4f0f608f5584-kube-api-access-nv4x2\") pod \"coredns-668d6bf9bc-4vc6k\" (UID: \"d36a9e7d-9b1c-4050-ab86-4f0f608f5584\") " pod="kube-system/coredns-668d6bf9bc-4vc6k" Oct 31 05:44:12.345067 kubelet[2197]: I1031 05:44:12.345035 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/755f8c0d-5dbb-4026-80b5-87b3cb17189f-config-volume\") pod \"coredns-668d6bf9bc-sw8r9\" (UID: \"755f8c0d-5dbb-4026-80b5-87b3cb17189f\") " pod="kube-system/coredns-668d6bf9bc-sw8r9" Oct 31 05:44:12.345219 kubelet[2197]: I1031 05:44:12.345188 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn8m5\" (UniqueName: \"kubernetes.io/projected/6f047019-b3ae-41f9-bdae-4d0664c67b92-kube-api-access-bn8m5\") pod \"calico-kube-controllers-58d798db8c-5nl8j\" (UID: 
\"6f047019-b3ae-41f9-bdae-4d0664c67b92\") " pod="calico-system/calico-kube-controllers-58d798db8c-5nl8j" Oct 31 05:44:12.345370 kubelet[2197]: I1031 05:44:12.345341 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lkwx\" (UniqueName: \"kubernetes.io/projected/755f8c0d-5dbb-4026-80b5-87b3cb17189f-kube-api-access-4lkwx\") pod \"coredns-668d6bf9bc-sw8r9\" (UID: \"755f8c0d-5dbb-4026-80b5-87b3cb17189f\") " pod="kube-system/coredns-668d6bf9bc-sw8r9" Oct 31 05:44:12.345527 kubelet[2197]: I1031 05:44:12.345498 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ac96e24b-c0dd-48fd-838b-a540fa2a89c0-calico-apiserver-certs\") pod \"calico-apiserver-7ff9f49d5d-dq5pc\" (UID: \"ac96e24b-c0dd-48fd-838b-a540fa2a89c0\") " pod="calico-apiserver/calico-apiserver-7ff9f49d5d-dq5pc" Oct 31 05:44:12.345716 kubelet[2197]: I1031 05:44:12.345687 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d36a9e7d-9b1c-4050-ab86-4f0f608f5584-config-volume\") pod \"coredns-668d6bf9bc-4vc6k\" (UID: \"d36a9e7d-9b1c-4050-ab86-4f0f608f5584\") " pod="kube-system/coredns-668d6bf9bc-4vc6k" Oct 31 05:44:12.345866 kubelet[2197]: I1031 05:44:12.345837 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6crh\" (UniqueName: \"kubernetes.io/projected/ac96e24b-c0dd-48fd-838b-a540fa2a89c0-kube-api-access-v6crh\") pod \"calico-apiserver-7ff9f49d5d-dq5pc\" (UID: \"ac96e24b-c0dd-48fd-838b-a540fa2a89c0\") " pod="calico-apiserver/calico-apiserver-7ff9f49d5d-dq5pc" Oct 31 05:44:12.452669 env[1308]: time="2025-10-31T05:44:12.444620783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 31 05:44:12.453485 kubelet[2197]: I1031 05:44:12.453428 2197 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4433a427-a60f-4547-95ae-ea306784cb66-calico-apiserver-certs\") pod \"calico-apiserver-7ff9f49d5d-sjjrx\" (UID: \"4433a427-a60f-4547-95ae-ea306784cb66\") " pod="calico-apiserver/calico-apiserver-7ff9f49d5d-sjjrx" Oct 31 05:44:12.454083 kubelet[2197]: I1031 05:44:12.453581 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8ck5\" (UniqueName: \"kubernetes.io/projected/2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c-kube-api-access-c8ck5\") pod \"goldmane-666569f655-xdjq9\" (UID: \"2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c\") " pod="calico-system/goldmane-666569f655-xdjq9" Oct 31 05:44:12.454083 kubelet[2197]: I1031 05:44:12.453637 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7vt9\" (UniqueName: \"kubernetes.io/projected/41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9-kube-api-access-z7vt9\") pod \"whisker-7b969d5456-zlxmn\" (UID: \"41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9\") " pod="calico-system/whisker-7b969d5456-zlxmn" Oct 31 05:44:12.454083 kubelet[2197]: I1031 05:44:12.453687 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9-whisker-ca-bundle\") pod \"whisker-7b969d5456-zlxmn\" (UID: \"41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9\") " pod="calico-system/whisker-7b969d5456-zlxmn" Oct 31 05:44:12.454893 kubelet[2197]: I1031 05:44:12.453736 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c-config\") pod \"goldmane-666569f655-xdjq9\" (UID: \"2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c\") " pod="calico-system/goldmane-666569f655-xdjq9" Oct 
31 05:44:12.454893 kubelet[2197]: I1031 05:44:12.454374 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c-goldmane-ca-bundle\") pod \"goldmane-666569f655-xdjq9\" (UID: \"2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c\") " pod="calico-system/goldmane-666569f655-xdjq9" Oct 31 05:44:12.454893 kubelet[2197]: I1031 05:44:12.454408 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9-whisker-backend-key-pair\") pod \"whisker-7b969d5456-zlxmn\" (UID: \"41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9\") " pod="calico-system/whisker-7b969d5456-zlxmn" Oct 31 05:44:12.454893 kubelet[2197]: I1031 05:44:12.454503 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr6kh\" (UniqueName: \"kubernetes.io/projected/4433a427-a60f-4547-95ae-ea306784cb66-kube-api-access-dr6kh\") pod \"calico-apiserver-7ff9f49d5d-sjjrx\" (UID: \"4433a427-a60f-4547-95ae-ea306784cb66\") " pod="calico-apiserver/calico-apiserver-7ff9f49d5d-sjjrx" Oct 31 05:44:12.454893 kubelet[2197]: I1031 05:44:12.454558 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c-goldmane-key-pair\") pod \"goldmane-666569f655-xdjq9\" (UID: \"2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c\") " pod="calico-system/goldmane-666569f655-xdjq9" Oct 31 05:44:12.585012 env[1308]: time="2025-10-31T05:44:12.584298157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sw8r9,Uid:755f8c0d-5dbb-4026-80b5-87b3cb17189f,Namespace:kube-system,Attempt:0,}" Oct 31 05:44:12.617415 env[1308]: time="2025-10-31T05:44:12.617352595Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ff9f49d5d-dq5pc,Uid:ac96e24b-c0dd-48fd-838b-a540fa2a89c0,Namespace:calico-apiserver,Attempt:0,}" Oct 31 05:44:12.622205 env[1308]: time="2025-10-31T05:44:12.622160660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58d798db8c-5nl8j,Uid:6f047019-b3ae-41f9-bdae-4d0664c67b92,Namespace:calico-system,Attempt:0,}" Oct 31 05:44:12.640076 env[1308]: time="2025-10-31T05:44:12.639988178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4vc6k,Uid:d36a9e7d-9b1c-4050-ab86-4f0f608f5584,Namespace:kube-system,Attempt:0,}" Oct 31 05:44:12.668993 env[1308]: time="2025-10-31T05:44:12.668918294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ff9f49d5d-sjjrx,Uid:4433a427-a60f-4547-95ae-ea306784cb66,Namespace:calico-apiserver,Attempt:0,}" Oct 31 05:44:12.672295 env[1308]: time="2025-10-31T05:44:12.672246175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-xdjq9,Uid:2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c,Namespace:calico-system,Attempt:0,}" Oct 31 05:44:12.687455 env[1308]: time="2025-10-31T05:44:12.687390169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b969d5456-zlxmn,Uid:41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9,Namespace:calico-system,Attempt:0,}" Oct 31 05:44:13.009883 env[1308]: time="2025-10-31T05:44:13.009774657Z" level=error msg="Failed to destroy network for sandbox \"dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.010469 env[1308]: time="2025-10-31T05:44:13.010410510Z" level=error msg="encountered an error cleaning up failed sandbox \"dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc\", marking sandbox state as SANDBOX_UNKNOWN" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.010574 env[1308]: time="2025-10-31T05:44:13.010483711Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ff9f49d5d-dq5pc,Uid:ac96e24b-c0dd-48fd-838b-a540fa2a89c0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.011093 kubelet[2197]: E1031 05:44:13.011003 2197 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.013672 kubelet[2197]: E1031 05:44:13.013612 2197 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7ff9f49d5d-dq5pc" Oct 31 05:44:13.013773 kubelet[2197]: E1031 05:44:13.013695 2197 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7ff9f49d5d-dq5pc" Oct 31 05:44:13.014124 kubelet[2197]: E1031 05:44:13.013832 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7ff9f49d5d-dq5pc_calico-apiserver(ac96e24b-c0dd-48fd-838b-a540fa2a89c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7ff9f49d5d-dq5pc_calico-apiserver(ac96e24b-c0dd-48fd-838b-a540fa2a89c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7ff9f49d5d-dq5pc" podUID="ac96e24b-c0dd-48fd-838b-a540fa2a89c0" Oct 31 05:44:13.035687 env[1308]: time="2025-10-31T05:44:13.035573236Z" level=error msg="Failed to destroy network for sandbox \"00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.036158 env[1308]: time="2025-10-31T05:44:13.036107350Z" level=error msg="encountered an error cleaning up failed sandbox \"00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.036597 env[1308]: time="2025-10-31T05:44:13.036381676Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-4vc6k,Uid:d36a9e7d-9b1c-4050-ab86-4f0f608f5584,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.038015 kubelet[2197]: E1031 05:44:13.037950 2197 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.038139 kubelet[2197]: E1031 05:44:13.038060 2197 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4vc6k" Oct 31 05:44:13.038139 kubelet[2197]: E1031 05:44:13.038114 2197 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4vc6k" Oct 31 05:44:13.038406 kubelet[2197]: E1031 05:44:13.038200 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-668d6bf9bc-4vc6k_kube-system(d36a9e7d-9b1c-4050-ab86-4f0f608f5584)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4vc6k_kube-system(d36a9e7d-9b1c-4050-ab86-4f0f608f5584)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4vc6k" podUID="d36a9e7d-9b1c-4050-ab86-4f0f608f5584" Oct 31 05:44:13.056387 env[1308]: time="2025-10-31T05:44:13.056282864Z" level=error msg="Failed to destroy network for sandbox \"6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.057246 env[1308]: time="2025-10-31T05:44:13.057195799Z" level=error msg="encountered an error cleaning up failed sandbox \"6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.057480 env[1308]: time="2025-10-31T05:44:13.057419187Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sw8r9,Uid:755f8c0d-5dbb-4026-80b5-87b3cb17189f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Oct 31 05:44:13.057912 env[1308]: time="2025-10-31T05:44:13.057858152Z" level=error msg="Failed to destroy network for sandbox \"5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.058143 kubelet[2197]: E1031 05:44:13.058055 2197 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.058257 kubelet[2197]: E1031 05:44:13.058171 2197 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-sw8r9" Oct 31 05:44:13.058333 kubelet[2197]: E1031 05:44:13.058220 2197 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-sw8r9" Oct 31 05:44:13.058422 kubelet[2197]: E1031 05:44:13.058349 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-668d6bf9bc-sw8r9_kube-system(755f8c0d-5dbb-4026-80b5-87b3cb17189f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-sw8r9_kube-system(755f8c0d-5dbb-4026-80b5-87b3cb17189f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-sw8r9" podUID="755f8c0d-5dbb-4026-80b5-87b3cb17189f" Oct 31 05:44:13.059478 env[1308]: time="2025-10-31T05:44:13.059430097Z" level=error msg="encountered an error cleaning up failed sandbox \"5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.059842 env[1308]: time="2025-10-31T05:44:13.059761587Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58d798db8c-5nl8j,Uid:6f047019-b3ae-41f9-bdae-4d0664c67b92,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.061194 kubelet[2197]: E1031 05:44:13.060926 2197 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.061194 kubelet[2197]: E1031 05:44:13.061014 2197 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58d798db8c-5nl8j" Oct 31 05:44:13.061194 kubelet[2197]: E1031 05:44:13.061054 2197 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58d798db8c-5nl8j" Oct 31 05:44:13.061808 kubelet[2197]: E1031 05:44:13.061129 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-58d798db8c-5nl8j_calico-system(6f047019-b3ae-41f9-bdae-4d0664c67b92)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-58d798db8c-5nl8j_calico-system(6f047019-b3ae-41f9-bdae-4d0664c67b92)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58d798db8c-5nl8j" podUID="6f047019-b3ae-41f9-bdae-4d0664c67b92" Oct 31 05:44:13.077378 kubelet[2197]: I1031 
05:44:13.077320 2197 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 05:44:13.176614 kernel: kauditd_printk_skb: 20 callbacks suppressed Oct 31 05:44:13.176875 kernel: audit: type=1325 audit(1761889453.163:315): table=filter:103 family=2 entries=21 op=nft_register_rule pid=3082 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:44:13.163000 audit[3082]: NETFILTER_CFG table=filter:103 family=2 entries=21 op=nft_register_rule pid=3082 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:44:13.194323 kernel: audit: type=1300 audit(1761889453.163:315): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd4ba17460 a2=0 a3=7ffd4ba1744c items=0 ppid=2302 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:13.163000 audit[3082]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffd4ba17460 a2=0 a3=7ffd4ba1744c items=0 ppid=2302 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:13.163000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:44:13.203667 kernel: audit: type=1327 audit(1761889453.163:315): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:44:13.209342 env[1308]: time="2025-10-31T05:44:13.204689264Z" level=error msg="Failed to destroy network for sandbox \"ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Oct 31 05:44:13.175000 audit[3082]: NETFILTER_CFG table=nat:104 family=2 entries=19 op=nft_register_chain pid=3082 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:44:13.208675 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02-shm.mount: Deactivated successfully. Oct 31 05:44:13.218850 kernel: audit: type=1325 audit(1761889453.175:316): table=nat:104 family=2 entries=19 op=nft_register_chain pid=3082 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:44:13.218920 kernel: audit: type=1300 audit(1761889453.175:316): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd4ba17460 a2=0 a3=7ffd4ba1744c items=0 ppid=2302 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:13.175000 audit[3082]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd4ba17460 a2=0 a3=7ffd4ba1744c items=0 ppid=2302 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:13.221451 env[1308]: time="2025-10-31T05:44:13.211384721Z" level=error msg="encountered an error cleaning up failed sandbox \"ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.221451 env[1308]: time="2025-10-31T05:44:13.211498040Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-xdjq9,Uid:2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.221770 kubelet[2197]: E1031 05:44:13.217973 2197 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.221770 kubelet[2197]: E1031 05:44:13.218078 2197 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-xdjq9" Oct 31 05:44:13.221770 kubelet[2197]: E1031 05:44:13.218146 2197 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-xdjq9" Oct 31 05:44:13.222109 kubelet[2197]: E1031 05:44:13.218235 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"goldmane-666569f655-xdjq9_calico-system(2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-xdjq9_calico-system(2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-xdjq9" podUID="2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c" Oct 31 05:44:13.175000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:44:13.236685 kernel: audit: type=1327 audit(1761889453.175:316): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:44:13.237938 env[1308]: time="2025-10-31T05:44:13.237549182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6jdvb,Uid:749c5f31-df45-44a4-9a60-d28a8f071a0b,Namespace:calico-system,Attempt:0,}" Oct 31 05:44:13.247170 env[1308]: time="2025-10-31T05:44:13.247088706Z" level=error msg="Failed to destroy network for sandbox \"3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.250777 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4-shm.mount: Deactivated successfully. 
Oct 31 05:44:13.253019 env[1308]: time="2025-10-31T05:44:13.252927816Z" level=error msg="encountered an error cleaning up failed sandbox \"3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.253105 env[1308]: time="2025-10-31T05:44:13.253042244Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ff9f49d5d-sjjrx,Uid:4433a427-a60f-4547-95ae-ea306784cb66,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.254160 kubelet[2197]: E1031 05:44:13.253472 2197 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.254160 kubelet[2197]: E1031 05:44:13.253610 2197 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7ff9f49d5d-sjjrx" Oct 31 05:44:13.254160 kubelet[2197]: E1031 05:44:13.253662 2197 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7ff9f49d5d-sjjrx" Oct 31 05:44:13.254602 kubelet[2197]: E1031 05:44:13.253755 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7ff9f49d5d-sjjrx_calico-apiserver(4433a427-a60f-4547-95ae-ea306784cb66)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7ff9f49d5d-sjjrx_calico-apiserver(4433a427-a60f-4547-95ae-ea306784cb66)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7ff9f49d5d-sjjrx" podUID="4433a427-a60f-4547-95ae-ea306784cb66" Oct 31 05:44:13.286149 env[1308]: time="2025-10-31T05:44:13.285814425Z" level=error msg="Failed to destroy network for sandbox \"df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.288943 env[1308]: time="2025-10-31T05:44:13.288887124Z" level=error msg="encountered an error cleaning up failed sandbox \"df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.289145 env[1308]: time="2025-10-31T05:44:13.288975116Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b969d5456-zlxmn,Uid:41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.289488 kubelet[2197]: E1031 05:44:13.289336 2197 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.289488 kubelet[2197]: E1031 05:44:13.289435 2197 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b969d5456-zlxmn" Oct 31 05:44:13.292004 kubelet[2197]: E1031 05:44:13.289487 2197 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/whisker-7b969d5456-zlxmn" Oct 31 05:44:13.292004 kubelet[2197]: E1031 05:44:13.289600 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7b969d5456-zlxmn_calico-system(41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7b969d5456-zlxmn_calico-system(41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7b969d5456-zlxmn" podUID="41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9" Oct 31 05:44:13.389835 env[1308]: time="2025-10-31T05:44:13.389700601Z" level=error msg="Failed to destroy network for sandbox \"0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.392130 env[1308]: time="2025-10-31T05:44:13.392074216Z" level=error msg="encountered an error cleaning up failed sandbox \"0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.392280 env[1308]: time="2025-10-31T05:44:13.392158908Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6jdvb,Uid:749c5f31-df45-44a4-9a60-d28a8f071a0b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.394404 kubelet[2197]: E1031 05:44:13.393058 2197 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.394404 kubelet[2197]: E1031 05:44:13.393318 2197 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6jdvb" Oct 31 05:44:13.394404 kubelet[2197]: E1031 05:44:13.393398 2197 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6jdvb" Oct 31 05:44:13.398007 kubelet[2197]: E1031 05:44:13.394287 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6jdvb_calico-system(749c5f31-df45-44a4-9a60-d28a8f071a0b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-6jdvb_calico-system(749c5f31-df45-44a4-9a60-d28a8f071a0b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6jdvb" podUID="749c5f31-df45-44a4-9a60-d28a8f071a0b" Oct 31 05:44:13.449511 kubelet[2197]: I1031 05:44:13.449418 2197 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" Oct 31 05:44:13.455093 kubelet[2197]: I1031 05:44:13.454698 2197 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" Oct 31 05:44:13.461037 kubelet[2197]: I1031 05:44:13.461002 2197 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" Oct 31 05:44:13.465047 env[1308]: time="2025-10-31T05:44:13.464092270Z" level=info msg="StopPodSandbox for \"3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4\"" Oct 31 05:44:13.465425 env[1308]: time="2025-10-31T05:44:13.465386555Z" level=info msg="StopPodSandbox for \"dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc\"" Oct 31 05:44:13.466344 env[1308]: time="2025-10-31T05:44:13.465577418Z" level=info msg="StopPodSandbox for \"5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1\"" Oct 31 05:44:13.466484 kubelet[2197]: I1031 05:44:13.466418 2197 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" Oct 31 05:44:13.468634 env[1308]: time="2025-10-31T05:44:13.468583552Z" level=info msg="StopPodSandbox for 
\"df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595\"" Oct 31 05:44:13.473835 kubelet[2197]: I1031 05:44:13.473074 2197 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" Oct 31 05:44:13.475913 env[1308]: time="2025-10-31T05:44:13.474251351Z" level=info msg="StopPodSandbox for \"ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02\"" Oct 31 05:44:13.478262 kubelet[2197]: I1031 05:44:13.477998 2197 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" Oct 31 05:44:13.480778 env[1308]: time="2025-10-31T05:44:13.479161628Z" level=info msg="StopPodSandbox for \"6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034\"" Oct 31 05:44:13.482940 kubelet[2197]: I1031 05:44:13.482850 2197 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" Oct 31 05:44:13.486144 env[1308]: time="2025-10-31T05:44:13.486041212Z" level=info msg="StopPodSandbox for \"0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993\"" Oct 31 05:44:13.490239 kubelet[2197]: I1031 05:44:13.490162 2197 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d" Oct 31 05:44:13.493190 env[1308]: time="2025-10-31T05:44:13.491189984Z" level=info msg="StopPodSandbox for \"00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d\"" Oct 31 05:44:13.703262 env[1308]: time="2025-10-31T05:44:13.703037594Z" level=error msg="StopPodSandbox for \"ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02\" failed" error="failed to destroy network for sandbox \"ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.705905 kubelet[2197]: E1031 05:44:13.704478 2197 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" Oct 31 05:44:13.707271 env[1308]: time="2025-10-31T05:44:13.707217412Z" level=error msg="StopPodSandbox for \"5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1\" failed" error="failed to destroy network for sandbox \"5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.707654 kubelet[2197]: E1031 05:44:13.707512 2197 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" Oct 31 05:44:13.711992 kubelet[2197]: E1031 05:44:13.705947 2197 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02"} Oct 31 05:44:13.712156 kubelet[2197]: E1031 05:44:13.707634 2197 kuberuntime_manager.go:1546] "Failed to 
stop sandbox" podSandboxID={"Type":"containerd","ID":"5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1"} Oct 31 05:44:13.712156 kubelet[2197]: E1031 05:44:13.712054 2197 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6f047019-b3ae-41f9-bdae-4d0664c67b92\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 05:44:13.712156 kubelet[2197]: E1031 05:44:13.712093 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6f047019-b3ae-41f9-bdae-4d0664c67b92\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58d798db8c-5nl8j" podUID="6f047019-b3ae-41f9-bdae-4d0664c67b92" Oct 31 05:44:13.712949 kubelet[2197]: E1031 05:44:13.712155 2197 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 05:44:13.712949 kubelet[2197]: E1031 05:44:13.712186 2197 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-xdjq9" podUID="2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c" Oct 31 05:44:13.713267 env[1308]: time="2025-10-31T05:44:13.713179243Z" level=error msg="StopPodSandbox for \"3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4\" failed" error="failed to destroy network for sandbox \"3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.713689 kubelet[2197]: E1031 05:44:13.713644 2197 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" Oct 31 05:44:13.713797 kubelet[2197]: E1031 05:44:13.713695 2197 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4"} Oct 31 05:44:13.713797 kubelet[2197]: E1031 05:44:13.713734 2197 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4433a427-a60f-4547-95ae-ea306784cb66\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 05:44:13.713797 kubelet[2197]: E1031 05:44:13.713764 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4433a427-a60f-4547-95ae-ea306784cb66\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7ff9f49d5d-sjjrx" podUID="4433a427-a60f-4547-95ae-ea306784cb66" Oct 31 05:44:13.727448 env[1308]: time="2025-10-31T05:44:13.727355906Z" level=error msg="StopPodSandbox for \"0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993\" failed" error="failed to destroy network for sandbox \"0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.727763 kubelet[2197]: E1031 05:44:13.727699 2197 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" Oct 31 05:44:13.727763 kubelet[2197]: E1031 05:44:13.727754 2197 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993"} Oct 31 05:44:13.727915 kubelet[2197]: E1031 05:44:13.727796 2197 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"749c5f31-df45-44a4-9a60-d28a8f071a0b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 05:44:13.727915 kubelet[2197]: E1031 05:44:13.727827 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"749c5f31-df45-44a4-9a60-d28a8f071a0b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6jdvb" podUID="749c5f31-df45-44a4-9a60-d28a8f071a0b" Oct 31 05:44:13.735806 env[1308]: time="2025-10-31T05:44:13.735739877Z" level=error msg="StopPodSandbox for \"00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d\" failed" error="failed to destroy network for sandbox \"00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 
05:44:13.736085 kubelet[2197]: E1031 05:44:13.735995 2197 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d" Oct 31 05:44:13.736170 kubelet[2197]: E1031 05:44:13.736084 2197 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d"} Oct 31 05:44:13.736170 kubelet[2197]: E1031 05:44:13.736147 2197 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d36a9e7d-9b1c-4050-ab86-4f0f608f5584\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 05:44:13.737007 kubelet[2197]: E1031 05:44:13.736180 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d36a9e7d-9b1c-4050-ab86-4f0f608f5584\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4vc6k" podUID="d36a9e7d-9b1c-4050-ab86-4f0f608f5584" Oct 31 05:44:13.756739 env[1308]: 
time="2025-10-31T05:44:13.756638403Z" level=error msg="StopPodSandbox for \"df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595\" failed" error="failed to destroy network for sandbox \"df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.757819 kubelet[2197]: E1031 05:44:13.757471 2197 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" Oct 31 05:44:13.757819 kubelet[2197]: E1031 05:44:13.757611 2197 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595"} Oct 31 05:44:13.757819 kubelet[2197]: E1031 05:44:13.757690 2197 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 05:44:13.757819 kubelet[2197]: E1031 05:44:13.757740 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7b969d5456-zlxmn" podUID="41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9" Oct 31 05:44:13.760243 env[1308]: time="2025-10-31T05:44:13.760153476Z" level=error msg="StopPodSandbox for \"dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc\" failed" error="failed to destroy network for sandbox \"dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.760614 kubelet[2197]: E1031 05:44:13.760528 2197 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" Oct 31 05:44:13.760786 kubelet[2197]: E1031 05:44:13.760655 2197 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc"} Oct 31 05:44:13.760786 kubelet[2197]: E1031 05:44:13.760730 2197 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ac96e24b-c0dd-48fd-838b-a540fa2a89c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 05:44:13.760951 kubelet[2197]: E1031 05:44:13.760785 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ac96e24b-c0dd-48fd-838b-a540fa2a89c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7ff9f49d5d-dq5pc" podUID="ac96e24b-c0dd-48fd-838b-a540fa2a89c0" Oct 31 05:44:13.767976 env[1308]: time="2025-10-31T05:44:13.767887043Z" level=error msg="StopPodSandbox for \"6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034\" failed" error="failed to destroy network for sandbox \"6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:13.768424 kubelet[2197]: E1031 05:44:13.768301 2197 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" Oct 31 05:44:13.768521 kubelet[2197]: E1031 05:44:13.768448 2197 
kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034"} Oct 31 05:44:13.768674 kubelet[2197]: E1031 05:44:13.768525 2197 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"755f8c0d-5dbb-4026-80b5-87b3cb17189f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 05:44:13.768775 kubelet[2197]: E1031 05:44:13.768612 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"755f8c0d-5dbb-4026-80b5-87b3cb17189f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-sw8r9" podUID="755f8c0d-5dbb-4026-80b5-87b3cb17189f" Oct 31 05:44:14.156849 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993-shm.mount: Deactivated successfully. Oct 31 05:44:14.157149 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595-shm.mount: Deactivated successfully. Oct 31 05:44:24.759283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1776159356.mount: Deactivated successfully. 
Oct 31 05:44:24.802712 env[1308]: time="2025-10-31T05:44:24.802631445Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 05:44:24.805857 env[1308]: time="2025-10-31T05:44:24.805809262Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 05:44:24.808444 env[1308]: time="2025-10-31T05:44:24.808408195Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 05:44:24.810995 env[1308]: time="2025-10-31T05:44:24.810944663Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 31 05:44:24.812013 env[1308]: time="2025-10-31T05:44:24.811962286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 31 05:44:24.862450 env[1308]: time="2025-10-31T05:44:24.862390043Z" level=info msg="CreateContainer within sandbox \"b7d7075b94f42aa3cd1353035b4285bab0574232f877cfce584d3eefc463f383\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 31 05:44:24.892145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1425750967.mount: Deactivated successfully. 
Oct 31 05:44:24.899875 env[1308]: time="2025-10-31T05:44:24.899816815Z" level=info msg="CreateContainer within sandbox \"b7d7075b94f42aa3cd1353035b4285bab0574232f877cfce584d3eefc463f383\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c0733a146273318292491838c0ff78b61e5f8f0a0a049924c2c2b81c5383735e\"" Oct 31 05:44:24.902811 env[1308]: time="2025-10-31T05:44:24.902773889Z" level=info msg="StartContainer for \"c0733a146273318292491838c0ff78b61e5f8f0a0a049924c2c2b81c5383735e\"" Oct 31 05:44:24.993479 env[1308]: time="2025-10-31T05:44:24.993291824Z" level=info msg="StartContainer for \"c0733a146273318292491838c0ff78b61e5f8f0a0a049924c2c2b81c5383735e\" returns successfully" Oct 31 05:44:25.227484 env[1308]: time="2025-10-31T05:44:25.227312773Z" level=info msg="StopPodSandbox for \"5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1\"" Oct 31 05:44:25.229652 env[1308]: time="2025-10-31T05:44:25.229584261Z" level=info msg="StopPodSandbox for \"ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02\"" Oct 31 05:44:25.231624 env[1308]: time="2025-10-31T05:44:25.231041168Z" level=info msg="StopPodSandbox for \"3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4\"" Oct 31 05:44:25.341894 env[1308]: time="2025-10-31T05:44:25.341806526Z" level=error msg="StopPodSandbox for \"3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4\" failed" error="failed to destroy network for sandbox \"3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:25.343262 kubelet[2197]: E1031 05:44:25.342993 2197 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" Oct 31 05:44:25.343262 kubelet[2197]: E1031 05:44:25.343087 2197 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4"} Oct 31 05:44:25.343262 kubelet[2197]: E1031 05:44:25.343145 2197 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4433a427-a60f-4547-95ae-ea306784cb66\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 05:44:25.343262 kubelet[2197]: E1031 05:44:25.343182 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4433a427-a60f-4547-95ae-ea306784cb66\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7ff9f49d5d-sjjrx" podUID="4433a427-a60f-4547-95ae-ea306784cb66" Oct 31 05:44:25.351652 env[1308]: time="2025-10-31T05:44:25.351527797Z" level=error msg="StopPodSandbox for \"5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1\" failed" error="failed to destroy network for sandbox \"5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1\": 
plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:25.352266 kubelet[2197]: E1031 05:44:25.352021 2197 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" Oct 31 05:44:25.352266 kubelet[2197]: E1031 05:44:25.352103 2197 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1"} Oct 31 05:44:25.352266 kubelet[2197]: E1031 05:44:25.352168 2197 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6f047019-b3ae-41f9-bdae-4d0664c67b92\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 05:44:25.352266 kubelet[2197]: E1031 05:44:25.352202 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6f047019-b3ae-41f9-bdae-4d0664c67b92\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58d798db8c-5nl8j" podUID="6f047019-b3ae-41f9-bdae-4d0664c67b92" Oct 31 05:44:25.369101 env[1308]: time="2025-10-31T05:44:25.369030248Z" level=error msg="StopPodSandbox for \"ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02\" failed" error="failed to destroy network for sandbox \"ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 05:44:25.369694 kubelet[2197]: E1031 05:44:25.369448 2197 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" Oct 31 05:44:25.369694 kubelet[2197]: E1031 05:44:25.369515 2197 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02"} Oct 31 05:44:25.369694 kubelet[2197]: E1031 05:44:25.369581 2197 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 05:44:25.369694 kubelet[2197]: 
E1031 05:44:25.369614 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-xdjq9" podUID="2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c" Oct 31 05:44:25.499908 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 31 05:44:25.500929 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 31 05:44:25.816616 kubelet[2197]: I1031 05:44:25.813567 2197 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hq9tg" podStartSLOduration=1.544325872 podStartE2EDuration="26.810940544s" podCreationTimestamp="2025-10-31 05:43:59 +0000 UTC" firstStartedPulling="2025-10-31 05:43:59.54738819 +0000 UTC m=+24.661305415" lastFinishedPulling="2025-10-31 05:44:24.81400286 +0000 UTC m=+49.927920087" observedRunningTime="2025-10-31 05:44:25.57668818 +0000 UTC m=+50.690605419" watchObservedRunningTime="2025-10-31 05:44:25.810940544 +0000 UTC m=+50.924857775" Oct 31 05:44:25.832625 env[1308]: time="2025-10-31T05:44:25.831987463Z" level=info msg="StopPodSandbox for \"df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595\"" Oct 31 05:44:26.227413 env[1308]: time="2025-10-31T05:44:26.226446864Z" level=info msg="StopPodSandbox for \"0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993\"" Oct 31 05:44:26.229068 env[1308]: time="2025-10-31T05:44:26.228284450Z" level=info msg="StopPodSandbox for \"00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d\"" Oct 31 05:44:26.280310 env[1308]: 
2025-10-31 05:44:26.003 [INFO][3330] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" Oct 31 05:44:26.280310 env[1308]: 2025-10-31 05:44:26.004 [INFO][3330] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" iface="eth0" netns="/var/run/netns/cni-ac3dc29f-a4f5-cea4-56c8-e5a179c03515" Oct 31 05:44:26.280310 env[1308]: 2025-10-31 05:44:26.004 [INFO][3330] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" iface="eth0" netns="/var/run/netns/cni-ac3dc29f-a4f5-cea4-56c8-e5a179c03515" Oct 31 05:44:26.280310 env[1308]: 2025-10-31 05:44:26.006 [INFO][3330] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" iface="eth0" netns="/var/run/netns/cni-ac3dc29f-a4f5-cea4-56c8-e5a179c03515" Oct 31 05:44:26.280310 env[1308]: 2025-10-31 05:44:26.006 [INFO][3330] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" Oct 31 05:44:26.280310 env[1308]: 2025-10-31 05:44:26.006 [INFO][3330] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" Oct 31 05:44:26.280310 env[1308]: 2025-10-31 05:44:26.217 [INFO][3341] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" HandleID="k8s-pod-network.df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" Workload="srv--f2mor.gb1.brightbox.com-k8s-whisker--7b969d5456--zlxmn-eth0" Oct 31 05:44:26.280310 env[1308]: 2025-10-31 05:44:26.219 [INFO][3341] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 31 05:44:26.280310 env[1308]: 2025-10-31 05:44:26.219 [INFO][3341] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 05:44:26.280310 env[1308]: 2025-10-31 05:44:26.259 [WARNING][3341] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" HandleID="k8s-pod-network.df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" Workload="srv--f2mor.gb1.brightbox.com-k8s-whisker--7b969d5456--zlxmn-eth0" Oct 31 05:44:26.280310 env[1308]: 2025-10-31 05:44:26.259 [INFO][3341] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" HandleID="k8s-pod-network.df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" Workload="srv--f2mor.gb1.brightbox.com-k8s-whisker--7b969d5456--zlxmn-eth0" Oct 31 05:44:26.280310 env[1308]: 2025-10-31 05:44:26.265 [INFO][3341] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 05:44:26.280310 env[1308]: 2025-10-31 05:44:26.271 [INFO][3330] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" Oct 31 05:44:26.288818 systemd[1]: run-netns-cni\x2dac3dc29f\x2da4f5\x2dcea4\x2d56c8\x2de5a179c03515.mount: Deactivated successfully. 
Oct 31 05:44:26.294682 env[1308]: time="2025-10-31T05:44:26.294626146Z" level=info msg="TearDown network for sandbox \"df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595\" successfully" Oct 31 05:44:26.294895 env[1308]: time="2025-10-31T05:44:26.294860018Z" level=info msg="StopPodSandbox for \"df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595\" returns successfully" Oct 31 05:44:26.411949 kubelet[2197]: I1031 05:44:26.411802 2197 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9-whisker-backend-key-pair\") pod \"41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9\" (UID: \"41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9\") " Oct 31 05:44:26.413311 kubelet[2197]: I1031 05:44:26.413118 2197 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9-whisker-ca-bundle\") pod \"41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9\" (UID: \"41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9\") " Oct 31 05:44:26.413311 kubelet[2197]: I1031 05:44:26.413188 2197 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7vt9\" (UniqueName: \"kubernetes.io/projected/41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9-kube-api-access-z7vt9\") pod \"41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9\" (UID: \"41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9\") " Oct 31 05:44:26.430070 kubelet[2197]: I1031 05:44:26.429994 2197 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9" (UID: "41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 31 05:44:26.439291 systemd[1]: var-lib-kubelet-pods-41b7ab9e\x2d79a5\x2d4dde\x2d9f1e\x2dfbd1786e1ae9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz7vt9.mount: Deactivated successfully. Oct 31 05:44:26.455802 systemd[1]: var-lib-kubelet-pods-41b7ab9e\x2d79a5\x2d4dde\x2d9f1e\x2dfbd1786e1ae9-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 31 05:44:26.457961 kubelet[2197]: I1031 05:44:26.457914 2197 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9" (UID: "41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 31 05:44:26.478896 kubelet[2197]: I1031 05:44:26.477475 2197 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9-kube-api-access-z7vt9" (OuterVolumeSpecName: "kube-api-access-z7vt9") pod "41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9" (UID: "41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9"). InnerVolumeSpecName "kube-api-access-z7vt9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 31 05:44:26.513945 kubelet[2197]: I1031 05:44:26.513831 2197 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9-whisker-backend-key-pair\") on node \"srv-f2mor.gb1.brightbox.com\" DevicePath \"\"" Oct 31 05:44:26.513945 kubelet[2197]: I1031 05:44:26.513876 2197 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9-whisker-ca-bundle\") on node \"srv-f2mor.gb1.brightbox.com\" DevicePath \"\"" Oct 31 05:44:26.513945 kubelet[2197]: I1031 05:44:26.513896 2197 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z7vt9\" (UniqueName: \"kubernetes.io/projected/41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9-kube-api-access-z7vt9\") on node \"srv-f2mor.gb1.brightbox.com\" DevicePath \"\"" Oct 31 05:44:26.623879 env[1308]: 2025-10-31 05:44:26.445 [INFO][3362] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" Oct 31 05:44:26.623879 env[1308]: 2025-10-31 05:44:26.446 [INFO][3362] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" iface="eth0" netns="/var/run/netns/cni-008a60e0-7538-f71d-e35c-e24fe7d401e5" Oct 31 05:44:26.623879 env[1308]: 2025-10-31 05:44:26.448 [INFO][3362] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" iface="eth0" netns="/var/run/netns/cni-008a60e0-7538-f71d-e35c-e24fe7d401e5" Oct 31 05:44:26.623879 env[1308]: 2025-10-31 05:44:26.463 [INFO][3362] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" iface="eth0" netns="/var/run/netns/cni-008a60e0-7538-f71d-e35c-e24fe7d401e5" Oct 31 05:44:26.623879 env[1308]: 2025-10-31 05:44:26.464 [INFO][3362] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" Oct 31 05:44:26.623879 env[1308]: 2025-10-31 05:44:26.464 [INFO][3362] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" Oct 31 05:44:26.623879 env[1308]: 2025-10-31 05:44:26.568 [INFO][3381] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" HandleID="k8s-pod-network.0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" Workload="srv--f2mor.gb1.brightbox.com-k8s-csi--node--driver--6jdvb-eth0" Oct 31 05:44:26.623879 env[1308]: 2025-10-31 05:44:26.568 [INFO][3381] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 05:44:26.623879 env[1308]: 2025-10-31 05:44:26.568 [INFO][3381] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 05:44:26.623879 env[1308]: 2025-10-31 05:44:26.585 [WARNING][3381] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" HandleID="k8s-pod-network.0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" Workload="srv--f2mor.gb1.brightbox.com-k8s-csi--node--driver--6jdvb-eth0" Oct 31 05:44:26.623879 env[1308]: 2025-10-31 05:44:26.586 [INFO][3381] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" HandleID="k8s-pod-network.0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" Workload="srv--f2mor.gb1.brightbox.com-k8s-csi--node--driver--6jdvb-eth0" Oct 31 05:44:26.623879 env[1308]: 2025-10-31 05:44:26.598 [INFO][3381] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 05:44:26.623879 env[1308]: 2025-10-31 05:44:26.621 [INFO][3362] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" Oct 31 05:44:26.630392 systemd[1]: run-netns-cni\x2d008a60e0\x2d7538\x2df71d\x2de35c\x2de24fe7d401e5.mount: Deactivated successfully. 
Oct 31 05:44:26.637996 env[1308]: time="2025-10-31T05:44:26.637820915Z" level=info msg="TearDown network for sandbox \"0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993\" successfully"
Oct 31 05:44:26.638383 env[1308]: time="2025-10-31T05:44:26.637876575Z" level=info msg="StopPodSandbox for \"0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993\" returns successfully"
Oct 31 05:44:26.647904 env[1308]: time="2025-10-31T05:44:26.647839463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6jdvb,Uid:749c5f31-df45-44a4-9a60-d28a8f071a0b,Namespace:calico-system,Attempt:1,}"
Oct 31 05:44:26.782801 env[1308]: 2025-10-31 05:44:26.512 [INFO][3371] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d"
Oct 31 05:44:26.782801 env[1308]: 2025-10-31 05:44:26.512 [INFO][3371] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d" iface="eth0" netns="/var/run/netns/cni-a51d7f67-ebcd-1331-350e-b4a9e03cca11"
Oct 31 05:44:26.782801 env[1308]: 2025-10-31 05:44:26.512 [INFO][3371] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d" iface="eth0" netns="/var/run/netns/cni-a51d7f67-ebcd-1331-350e-b4a9e03cca11"
Oct 31 05:44:26.782801 env[1308]: 2025-10-31 05:44:26.512 [INFO][3371] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d" iface="eth0" netns="/var/run/netns/cni-a51d7f67-ebcd-1331-350e-b4a9e03cca11"
Oct 31 05:44:26.782801 env[1308]: 2025-10-31 05:44:26.513 [INFO][3371] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d"
Oct 31 05:44:26.782801 env[1308]: 2025-10-31 05:44:26.513 [INFO][3371] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d"
Oct 31 05:44:26.782801 env[1308]: 2025-10-31 05:44:26.745 [INFO][3387] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d" HandleID="k8s-pod-network.00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d" Workload="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4vc6k-eth0"
Oct 31 05:44:26.782801 env[1308]: 2025-10-31 05:44:26.745 [INFO][3387] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Oct 31 05:44:26.782801 env[1308]: 2025-10-31 05:44:26.745 [INFO][3387] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Oct 31 05:44:26.782801 env[1308]: 2025-10-31 05:44:26.770 [WARNING][3387] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d" HandleID="k8s-pod-network.00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d" Workload="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4vc6k-eth0"
Oct 31 05:44:26.782801 env[1308]: 2025-10-31 05:44:26.770 [INFO][3387] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d" HandleID="k8s-pod-network.00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d" Workload="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4vc6k-eth0"
Oct 31 05:44:26.782801 env[1308]: 2025-10-31 05:44:26.774 [INFO][3387] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Oct 31 05:44:26.782801 env[1308]: 2025-10-31 05:44:26.780 [INFO][3371] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d"
Oct 31 05:44:26.787485 systemd[1]: run-netns-cni\x2da51d7f67\x2debcd\x2d1331\x2d350e\x2db4a9e03cca11.mount: Deactivated successfully.
Oct 31 05:44:26.790766 env[1308]: time="2025-10-31T05:44:26.790701355Z" level=info msg="TearDown network for sandbox \"00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d\" successfully"
Oct 31 05:44:26.790923 env[1308]: time="2025-10-31T05:44:26.790888608Z" level=info msg="StopPodSandbox for \"00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d\" returns successfully"
Oct 31 05:44:26.793019 env[1308]: time="2025-10-31T05:44:26.792979615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4vc6k,Uid:d36a9e7d-9b1c-4050-ab86-4f0f608f5584,Namespace:kube-system,Attempt:1,}"
Oct 31 05:44:26.827965 kubelet[2197]: I1031 05:44:26.827842 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/da74ef2c-d536-4fbc-9b28-ba72dfbbfc21-whisker-backend-key-pair\") pod \"whisker-6f447487f8-8md8h\" (UID: \"da74ef2c-d536-4fbc-9b28-ba72dfbbfc21\") " pod="calico-system/whisker-6f447487f8-8md8h"
Oct 31 05:44:26.827965 kubelet[2197]: I1031 05:44:26.827922 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da74ef2c-d536-4fbc-9b28-ba72dfbbfc21-whisker-ca-bundle\") pod \"whisker-6f447487f8-8md8h\" (UID: \"da74ef2c-d536-4fbc-9b28-ba72dfbbfc21\") " pod="calico-system/whisker-6f447487f8-8md8h"
Oct 31 05:44:26.827965 kubelet[2197]: I1031 05:44:26.827960 2197 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjkfs\" (UniqueName: \"kubernetes.io/projected/da74ef2c-d536-4fbc-9b28-ba72dfbbfc21-kube-api-access-tjkfs\") pod \"whisker-6f447487f8-8md8h\" (UID: \"da74ef2c-d536-4fbc-9b28-ba72dfbbfc21\") " pod="calico-system/whisker-6f447487f8-8md8h"
Oct 31 05:44:26.988594 env[1308]: time="2025-10-31T05:44:26.988509316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f447487f8-8md8h,Uid:da74ef2c-d536-4fbc-9b28-ba72dfbbfc21,Namespace:calico-system,Attempt:0,}"
Oct 31 05:44:27.090321 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Oct 31 05:44:27.091303 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali09ab8106a20: link becomes ready
Oct 31 05:44:27.098269 systemd-networkd[1069]: cali09ab8106a20: Link UP
Oct 31 05:44:27.100293 systemd-networkd[1069]: cali09ab8106a20: Gained carrier
Oct 31 05:44:27.125744 env[1308]: 2025-10-31 05:44:26.851 [INFO][3409] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Oct 31 05:44:27.125744 env[1308]: 2025-10-31 05:44:26.882 [INFO][3409] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--f2mor.gb1.brightbox.com-k8s-csi--node--driver--6jdvb-eth0 csi-node-driver- calico-system 749c5f31-df45-44a4-9a60-d28a8f071a0b 935 0 2025-10-31 05:43:59 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-f2mor.gb1.brightbox.com csi-node-driver-6jdvb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali09ab8106a20 [] [] }} ContainerID="27da1bf714a7b9834d80897dba6f0e9eba5c46cd8cba8a6efe7ec1569057ce53" Namespace="calico-system" Pod="csi-node-driver-6jdvb" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-csi--node--driver--6jdvb-"
Oct 31 05:44:27.125744 env[1308]: 2025-10-31 05:44:26.882 [INFO][3409] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="27da1bf714a7b9834d80897dba6f0e9eba5c46cd8cba8a6efe7ec1569057ce53" Namespace="calico-system" Pod="csi-node-driver-6jdvb" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-csi--node--driver--6jdvb-eth0"
Oct 31 05:44:27.125744 env[1308]: 2025-10-31 05:44:26.980 [INFO][3450] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="27da1bf714a7b9834d80897dba6f0e9eba5c46cd8cba8a6efe7ec1569057ce53" HandleID="k8s-pod-network.27da1bf714a7b9834d80897dba6f0e9eba5c46cd8cba8a6efe7ec1569057ce53" Workload="srv--f2mor.gb1.brightbox.com-k8s-csi--node--driver--6jdvb-eth0"
Oct 31 05:44:27.125744 env[1308]: 2025-10-31 05:44:26.981 [INFO][3450] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="27da1bf714a7b9834d80897dba6f0e9eba5c46cd8cba8a6efe7ec1569057ce53" HandleID="k8s-pod-network.27da1bf714a7b9834d80897dba6f0e9eba5c46cd8cba8a6efe7ec1569057ce53" Workload="srv--f2mor.gb1.brightbox.com-k8s-csi--node--driver--6jdvb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd5a0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-f2mor.gb1.brightbox.com", "pod":"csi-node-driver-6jdvb", "timestamp":"2025-10-31 05:44:26.98055224 +0000 UTC"}, Hostname:"srv-f2mor.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 31 05:44:27.125744 env[1308]: 2025-10-31 05:44:26.981 [INFO][3450] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Oct 31 05:44:27.125744 env[1308]: 2025-10-31 05:44:26.981 [INFO][3450] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Oct 31 05:44:27.125744 env[1308]: 2025-10-31 05:44:26.981 [INFO][3450] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-f2mor.gb1.brightbox.com'
Oct 31 05:44:27.125744 env[1308]: 2025-10-31 05:44:26.997 [INFO][3450] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.27da1bf714a7b9834d80897dba6f0e9eba5c46cd8cba8a6efe7ec1569057ce53" host="srv-f2mor.gb1.brightbox.com"
Oct 31 05:44:27.125744 env[1308]: 2025-10-31 05:44:27.012 [INFO][3450] ipam/ipam.go 394: Looking up existing affinities for host host="srv-f2mor.gb1.brightbox.com"
Oct 31 05:44:27.125744 env[1308]: 2025-10-31 05:44:27.019 [INFO][3450] ipam/ipam.go 511: Trying affinity for 192.168.24.128/26 host="srv-f2mor.gb1.brightbox.com"
Oct 31 05:44:27.125744 env[1308]: 2025-10-31 05:44:27.021 [INFO][3450] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.128/26 host="srv-f2mor.gb1.brightbox.com"
Oct 31 05:44:27.125744 env[1308]: 2025-10-31 05:44:27.024 [INFO][3450] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.128/26 host="srv-f2mor.gb1.brightbox.com"
Oct 31 05:44:27.125744 env[1308]: 2025-10-31 05:44:27.024 [INFO][3450] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.24.128/26 handle="k8s-pod-network.27da1bf714a7b9834d80897dba6f0e9eba5c46cd8cba8a6efe7ec1569057ce53" host="srv-f2mor.gb1.brightbox.com"
Oct 31 05:44:27.125744 env[1308]: 2025-10-31 05:44:27.026 [INFO][3450] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.27da1bf714a7b9834d80897dba6f0e9eba5c46cd8cba8a6efe7ec1569057ce53
Oct 31 05:44:27.125744 env[1308]: 2025-10-31 05:44:27.037 [INFO][3450] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.24.128/26 handle="k8s-pod-network.27da1bf714a7b9834d80897dba6f0e9eba5c46cd8cba8a6efe7ec1569057ce53" host="srv-f2mor.gb1.brightbox.com"
Oct 31 05:44:27.125744 env[1308]: 2025-10-31 05:44:27.047 [INFO][3450] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.24.129/26] block=192.168.24.128/26 handle="k8s-pod-network.27da1bf714a7b9834d80897dba6f0e9eba5c46cd8cba8a6efe7ec1569057ce53" host="srv-f2mor.gb1.brightbox.com"
Oct 31 05:44:27.125744 env[1308]: 2025-10-31 05:44:27.048 [INFO][3450] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.129/26] handle="k8s-pod-network.27da1bf714a7b9834d80897dba6f0e9eba5c46cd8cba8a6efe7ec1569057ce53" host="srv-f2mor.gb1.brightbox.com"
Oct 31 05:44:27.125744 env[1308]: 2025-10-31 05:44:27.048 [INFO][3450] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Oct 31 05:44:27.125744 env[1308]: 2025-10-31 05:44:27.048 [INFO][3450] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.24.129/26] IPv6=[] ContainerID="27da1bf714a7b9834d80897dba6f0e9eba5c46cd8cba8a6efe7ec1569057ce53" HandleID="k8s-pod-network.27da1bf714a7b9834d80897dba6f0e9eba5c46cd8cba8a6efe7ec1569057ce53" Workload="srv--f2mor.gb1.brightbox.com-k8s-csi--node--driver--6jdvb-eth0"
Oct 31 05:44:27.137946 env[1308]: 2025-10-31 05:44:27.051 [INFO][3409] cni-plugin/k8s.go 418: Populated endpoint ContainerID="27da1bf714a7b9834d80897dba6f0e9eba5c46cd8cba8a6efe7ec1569057ce53" Namespace="calico-system" Pod="csi-node-driver-6jdvb" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-csi--node--driver--6jdvb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f2mor.gb1.brightbox.com-k8s-csi--node--driver--6jdvb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"749c5f31-df45-44a4-9a60-d28a8f071a0b", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 43, 59, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f2mor.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-6jdvb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali09ab8106a20", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Oct 31 05:44:27.137946 env[1308]: 2025-10-31 05:44:27.051 [INFO][3409] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.129/32] ContainerID="27da1bf714a7b9834d80897dba6f0e9eba5c46cd8cba8a6efe7ec1569057ce53" Namespace="calico-system" Pod="csi-node-driver-6jdvb" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-csi--node--driver--6jdvb-eth0"
Oct 31 05:44:27.137946 env[1308]: 2025-10-31 05:44:27.051 [INFO][3409] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali09ab8106a20 ContainerID="27da1bf714a7b9834d80897dba6f0e9eba5c46cd8cba8a6efe7ec1569057ce53" Namespace="calico-system" Pod="csi-node-driver-6jdvb" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-csi--node--driver--6jdvb-eth0"
Oct 31 05:44:27.137946 env[1308]: 2025-10-31 05:44:27.103 [INFO][3409] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="27da1bf714a7b9834d80897dba6f0e9eba5c46cd8cba8a6efe7ec1569057ce53" Namespace="calico-system" Pod="csi-node-driver-6jdvb" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-csi--node--driver--6jdvb-eth0"
Oct 31 05:44:27.137946 env[1308]: 2025-10-31 05:44:27.105 [INFO][3409] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="27da1bf714a7b9834d80897dba6f0e9eba5c46cd8cba8a6efe7ec1569057ce53" Namespace="calico-system" Pod="csi-node-driver-6jdvb" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-csi--node--driver--6jdvb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f2mor.gb1.brightbox.com-k8s-csi--node--driver--6jdvb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"749c5f31-df45-44a4-9a60-d28a8f071a0b", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 43, 59, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f2mor.gb1.brightbox.com", ContainerID:"27da1bf714a7b9834d80897dba6f0e9eba5c46cd8cba8a6efe7ec1569057ce53", Pod:"csi-node-driver-6jdvb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali09ab8106a20", MAC:"de:a6:64:91:ec:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Oct 31 05:44:27.137946 env[1308]: 2025-10-31 05:44:27.122 [INFO][3409] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="27da1bf714a7b9834d80897dba6f0e9eba5c46cd8cba8a6efe7ec1569057ce53" Namespace="calico-system" Pod="csi-node-driver-6jdvb" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-csi--node--driver--6jdvb-eth0"
Oct 31 05:44:27.199725 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali36170ee9025: link becomes ready
Oct 31 05:44:27.200008 systemd-networkd[1069]: cali36170ee9025: Link UP
Oct 31 05:44:27.200994 systemd-networkd[1069]: cali36170ee9025: Gained carrier
Oct 31 05:44:27.230587 env[1308]: time="2025-10-31T05:44:27.230373918Z" level=info msg="StopPodSandbox for \"6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034\""
Oct 31 05:44:27.242647 kubelet[2197]: I1031 05:44:27.242508 2197 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9" path="/var/lib/kubelet/pods/41b7ab9e-79a5-4dde-9f1e-fbd1786e1ae9/volumes"
Oct 31 05:44:27.245893 env[1308]: 2025-10-31 05:44:26.888 [INFO][3431] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Oct 31 05:44:27.245893 env[1308]: 2025-10-31 05:44:26.931 [INFO][3431] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4vc6k-eth0 coredns-668d6bf9bc- kube-system d36a9e7d-9b1c-4050-ab86-4f0f608f5584 940 0 2025-10-31 05:43:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-f2mor.gb1.brightbox.com coredns-668d6bf9bc-4vc6k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali36170ee9025 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ef34184328afc73e802aa7879a3fe4b9a57dd12992ed6273bf37a266c1e3733a" Namespace="kube-system" Pod="coredns-668d6bf9bc-4vc6k" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4vc6k-"
Oct 31 05:44:27.245893 env[1308]: 2025-10-31 05:44:26.931 [INFO][3431] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ef34184328afc73e802aa7879a3fe4b9a57dd12992ed6273bf37a266c1e3733a" Namespace="kube-system" Pod="coredns-668d6bf9bc-4vc6k" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4vc6k-eth0"
Oct 31 05:44:27.245893 env[1308]: 2025-10-31 05:44:27.053 [INFO][3459] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ef34184328afc73e802aa7879a3fe4b9a57dd12992ed6273bf37a266c1e3733a" HandleID="k8s-pod-network.ef34184328afc73e802aa7879a3fe4b9a57dd12992ed6273bf37a266c1e3733a" Workload="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4vc6k-eth0"
Oct 31 05:44:27.245893 env[1308]: 2025-10-31 05:44:27.055 [INFO][3459] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ef34184328afc73e802aa7879a3fe4b9a57dd12992ed6273bf37a266c1e3733a" HandleID="k8s-pod-network.ef34184328afc73e802aa7879a3fe4b9a57dd12992ed6273bf37a266c1e3733a" Workload="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4vc6k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c9b80), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-f2mor.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-4vc6k", "timestamp":"2025-10-31 05:44:27.053651163 +0000 UTC"}, Hostname:"srv-f2mor.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 31 05:44:27.245893 env[1308]: 2025-10-31 05:44:27.055 [INFO][3459] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Oct 31 05:44:27.245893 env[1308]: 2025-10-31 05:44:27.055 [INFO][3459] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Oct 31 05:44:27.245893 env[1308]: 2025-10-31 05:44:27.056 [INFO][3459] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-f2mor.gb1.brightbox.com'
Oct 31 05:44:27.245893 env[1308]: 2025-10-31 05:44:27.103 [INFO][3459] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ef34184328afc73e802aa7879a3fe4b9a57dd12992ed6273bf37a266c1e3733a" host="srv-f2mor.gb1.brightbox.com"
Oct 31 05:44:27.245893 env[1308]: 2025-10-31 05:44:27.147 [INFO][3459] ipam/ipam.go 394: Looking up existing affinities for host host="srv-f2mor.gb1.brightbox.com"
Oct 31 05:44:27.245893 env[1308]: 2025-10-31 05:44:27.160 [INFO][3459] ipam/ipam.go 511: Trying affinity for 192.168.24.128/26 host="srv-f2mor.gb1.brightbox.com"
Oct 31 05:44:27.245893 env[1308]: 2025-10-31 05:44:27.164 [INFO][3459] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.128/26 host="srv-f2mor.gb1.brightbox.com"
Oct 31 05:44:27.245893 env[1308]: 2025-10-31 05:44:27.168 [INFO][3459] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.128/26 host="srv-f2mor.gb1.brightbox.com"
Oct 31 05:44:27.245893 env[1308]: 2025-10-31 05:44:27.168 [INFO][3459] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.24.128/26 handle="k8s-pod-network.ef34184328afc73e802aa7879a3fe4b9a57dd12992ed6273bf37a266c1e3733a" host="srv-f2mor.gb1.brightbox.com"
Oct 31 05:44:27.245893 env[1308]: 2025-10-31 05:44:27.170 [INFO][3459] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ef34184328afc73e802aa7879a3fe4b9a57dd12992ed6273bf37a266c1e3733a
Oct 31 05:44:27.245893 env[1308]: 2025-10-31 05:44:27.180 [INFO][3459] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.24.128/26 handle="k8s-pod-network.ef34184328afc73e802aa7879a3fe4b9a57dd12992ed6273bf37a266c1e3733a" host="srv-f2mor.gb1.brightbox.com"
Oct 31 05:44:27.245893 env[1308]: 2025-10-31 05:44:27.188 [INFO][3459] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.24.130/26] block=192.168.24.128/26 handle="k8s-pod-network.ef34184328afc73e802aa7879a3fe4b9a57dd12992ed6273bf37a266c1e3733a" host="srv-f2mor.gb1.brightbox.com"
Oct 31 05:44:27.245893 env[1308]: 2025-10-31 05:44:27.188 [INFO][3459] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.130/26] handle="k8s-pod-network.ef34184328afc73e802aa7879a3fe4b9a57dd12992ed6273bf37a266c1e3733a" host="srv-f2mor.gb1.brightbox.com"
Oct 31 05:44:27.245893 env[1308]: 2025-10-31 05:44:27.188 [INFO][3459] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Oct 31 05:44:27.245893 env[1308]: 2025-10-31 05:44:27.188 [INFO][3459] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.24.130/26] IPv6=[] ContainerID="ef34184328afc73e802aa7879a3fe4b9a57dd12992ed6273bf37a266c1e3733a" HandleID="k8s-pod-network.ef34184328afc73e802aa7879a3fe4b9a57dd12992ed6273bf37a266c1e3733a" Workload="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4vc6k-eth0"
Oct 31 05:44:27.248134 env[1308]: 2025-10-31 05:44:27.193 [INFO][3431] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ef34184328afc73e802aa7879a3fe4b9a57dd12992ed6273bf37a266c1e3733a" Namespace="kube-system" Pod="coredns-668d6bf9bc-4vc6k" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4vc6k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4vc6k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d36a9e7d-9b1c-4050-ab86-4f0f608f5584", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 43, 39, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f2mor.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-4vc6k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali36170ee9025", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Oct 31 05:44:27.248134 env[1308]: 2025-10-31 05:44:27.193 [INFO][3431] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.130/32] ContainerID="ef34184328afc73e802aa7879a3fe4b9a57dd12992ed6273bf37a266c1e3733a" Namespace="kube-system" Pod="coredns-668d6bf9bc-4vc6k" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4vc6k-eth0"
Oct 31 05:44:27.248134 env[1308]: 2025-10-31 05:44:27.193 [INFO][3431] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali36170ee9025 ContainerID="ef34184328afc73e802aa7879a3fe4b9a57dd12992ed6273bf37a266c1e3733a" Namespace="kube-system" Pod="coredns-668d6bf9bc-4vc6k" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4vc6k-eth0"
Oct 31 05:44:27.248134 env[1308]: 2025-10-31 05:44:27.202 [INFO][3431] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ef34184328afc73e802aa7879a3fe4b9a57dd12992ed6273bf37a266c1e3733a" Namespace="kube-system" Pod="coredns-668d6bf9bc-4vc6k" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4vc6k-eth0"
Oct 31 05:44:27.248134 env[1308]: 2025-10-31 05:44:27.202 [INFO][3431] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ef34184328afc73e802aa7879a3fe4b9a57dd12992ed6273bf37a266c1e3733a" Namespace="kube-system" Pod="coredns-668d6bf9bc-4vc6k" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4vc6k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4vc6k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d36a9e7d-9b1c-4050-ab86-4f0f608f5584", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 43, 39, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f2mor.gb1.brightbox.com", ContainerID:"ef34184328afc73e802aa7879a3fe4b9a57dd12992ed6273bf37a266c1e3733a", Pod:"coredns-668d6bf9bc-4vc6k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali36170ee9025", MAC:"02:88:40:1e:40:5e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Oct 31 05:44:27.248134 env[1308]: 2025-10-31 05:44:27.237 [INFO][3431] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ef34184328afc73e802aa7879a3fe4b9a57dd12992ed6273bf37a266c1e3733a" Namespace="kube-system" Pod="coredns-668d6bf9bc-4vc6k" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4vc6k-eth0"
Oct 31 05:44:27.290804 env[1308]: time="2025-10-31T05:44:27.290580946Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 05:44:27.291310 env[1308]: time="2025-10-31T05:44:27.291220520Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 05:44:27.291634 env[1308]: time="2025-10-31T05:44:27.291556853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 05:44:27.292197 env[1308]: time="2025-10-31T05:44:27.292147066Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/27da1bf714a7b9834d80897dba6f0e9eba5c46cd8cba8a6efe7ec1569057ce53 pid=3495 runtime=io.containerd.runc.v2
Oct 31 05:44:27.351919 env[1308]: time="2025-10-31T05:44:27.349694527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 05:44:27.351919 env[1308]: time="2025-10-31T05:44:27.349792786Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 05:44:27.352200 env[1308]: time="2025-10-31T05:44:27.349813036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 05:44:27.352200 env[1308]: time="2025-10-31T05:44:27.350915379Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef34184328afc73e802aa7879a3fe4b9a57dd12992ed6273bf37a266c1e3733a pid=3532 runtime=io.containerd.runc.v2
Oct 31 05:44:27.539630 env[1308]: time="2025-10-31T05:44:27.538898981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4vc6k,Uid:d36a9e7d-9b1c-4050-ab86-4f0f608f5584,Namespace:kube-system,Attempt:1,} returns sandbox id \"ef34184328afc73e802aa7879a3fe4b9a57dd12992ed6273bf37a266c1e3733a\""
Oct 31 05:44:27.550865 env[1308]: time="2025-10-31T05:44:27.550800646Z" level=info msg="CreateContainer within sandbox \"ef34184328afc73e802aa7879a3fe4b9a57dd12992ed6273bf37a266c1e3733a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Oct 31 05:44:27.565864 env[1308]: time="2025-10-31T05:44:27.565689567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6jdvb,Uid:749c5f31-df45-44a4-9a60-d28a8f071a0b,Namespace:calico-system,Attempt:1,} returns sandbox id \"27da1bf714a7b9834d80897dba6f0e9eba5c46cd8cba8a6efe7ec1569057ce53\""
Oct 31 05:44:27.568859 env[1308]: time="2025-10-31T05:44:27.568810457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Oct 31 05:44:27.584278 env[1308]: time="2025-10-31T05:44:27.584210474Z" level=info msg="CreateContainer within sandbox \"ef34184328afc73e802aa7879a3fe4b9a57dd12992ed6273bf37a266c1e3733a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e412ce912a4075620347742f4045c235d74a1c138eee3430834ab5f86f97ca73\""
Oct 31 05:44:27.585069 env[1308]: time="2025-10-31T05:44:27.585030641Z" level=info msg="StartContainer for \"e412ce912a4075620347742f4045c235d74a1c138eee3430834ab5f86f97ca73\""
Oct 31 05:44:27.702297 systemd-networkd[1069]: calib9af854fba9: Link UP
Oct 31 05:44:27.715575 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib9af854fba9: link becomes ready
Oct 31 05:44:27.724821 systemd-networkd[1069]: calib9af854fba9: Gained carrier
Oct 31 05:44:27.786687 env[1308]: 2025-10-31 05:44:27.178 [INFO][3467] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Oct 31 05:44:27.786687 env[1308]: 2025-10-31 05:44:27.251 [INFO][3467] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--f2mor.gb1.brightbox.com-k8s-whisker--6f447487f8--8md8h-eth0 whisker-6f447487f8- calico-system da74ef2c-d536-4fbc-9b28-ba72dfbbfc21 957 0 2025-10-31 05:44:26 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6f447487f8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-f2mor.gb1.brightbox.com whisker-6f447487f8-8md8h eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib9af854fba9 [] [] }} ContainerID="2911421eaf2fcc7d3f0e737b9529529ff7591556127f7e9de63f721b5e7623ff" Namespace="calico-system" Pod="whisker-6f447487f8-8md8h" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-whisker--6f447487f8--8md8h-"
Oct 31 05:44:27.786687 env[1308]: 2025-10-31 05:44:27.251 [INFO][3467] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2911421eaf2fcc7d3f0e737b9529529ff7591556127f7e9de63f721b5e7623ff" Namespace="calico-system" Pod="whisker-6f447487f8-8md8h" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-whisker--6f447487f8--8md8h-eth0"
Oct 31 05:44:27.786687 env[1308]: 2025-10-31 05:44:27.462 [INFO][3499] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2911421eaf2fcc7d3f0e737b9529529ff7591556127f7e9de63f721b5e7623ff" HandleID="k8s-pod-network.2911421eaf2fcc7d3f0e737b9529529ff7591556127f7e9de63f721b5e7623ff" Workload="srv--f2mor.gb1.brightbox.com-k8s-whisker--6f447487f8--8md8h-eth0"
Oct 31 05:44:27.786687 env[1308]: 2025-10-31 05:44:27.462 [INFO][3499] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2911421eaf2fcc7d3f0e737b9529529ff7591556127f7e9de63f721b5e7623ff" HandleID="k8s-pod-network.2911421eaf2fcc7d3f0e737b9529529ff7591556127f7e9de63f721b5e7623ff" Workload="srv--f2mor.gb1.brightbox.com-k8s-whisker--6f447487f8--8md8h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00025d2d0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-f2mor.gb1.brightbox.com", "pod":"whisker-6f447487f8-8md8h", "timestamp":"2025-10-31 05:44:27.462163421 +0000 UTC"}, Hostname:"srv-f2mor.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 31 05:44:27.786687 env[1308]: 2025-10-31 05:44:27.462 [INFO][3499] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Oct 31 05:44:27.786687 env[1308]: 2025-10-31 05:44:27.463 [INFO][3499] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Oct 31 05:44:27.786687 env[1308]: 2025-10-31 05:44:27.463 [INFO][3499] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-f2mor.gb1.brightbox.com'
Oct 31 05:44:27.786687 env[1308]: 2025-10-31 05:44:27.478 [INFO][3499] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2911421eaf2fcc7d3f0e737b9529529ff7591556127f7e9de63f721b5e7623ff" host="srv-f2mor.gb1.brightbox.com"
Oct 31 05:44:27.786687 env[1308]: 2025-10-31 05:44:27.496 [INFO][3499] ipam/ipam.go 394: Looking up existing affinities for host host="srv-f2mor.gb1.brightbox.com"
Oct 31 05:44:27.786687 env[1308]: 2025-10-31 05:44:27.509 [INFO][3499] ipam/ipam.go 511: Trying affinity for 192.168.24.128/26 host="srv-f2mor.gb1.brightbox.com"
Oct 31 05:44:27.786687 env[1308]: 2025-10-31 05:44:27.515 [INFO][3499] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.128/26 host="srv-f2mor.gb1.brightbox.com"
Oct 31 05:44:27.786687 env[1308]: 2025-10-31 05:44:27.543 [INFO][3499] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.128/26 host="srv-f2mor.gb1.brightbox.com"
Oct 31 05:44:27.786687 env[1308]: 2025-10-31 05:44:27.543 [INFO][3499] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.24.128/26 handle="k8s-pod-network.2911421eaf2fcc7d3f0e737b9529529ff7591556127f7e9de63f721b5e7623ff" host="srv-f2mor.gb1.brightbox.com"
Oct 31 05:44:27.786687 env[1308]: 2025-10-31 05:44:27.546 [INFO][3499] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2911421eaf2fcc7d3f0e737b9529529ff7591556127f7e9de63f721b5e7623ff
Oct 31 05:44:27.786687 env[1308]: 2025-10-31 05:44:27.668 [INFO][3499] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.24.128/26 handle="k8s-pod-network.2911421eaf2fcc7d3f0e737b9529529ff7591556127f7e9de63f721b5e7623ff" host="srv-f2mor.gb1.brightbox.com"
Oct 31 05:44:27.786687 env[1308]: 2025-10-31 05:44:27.683 [INFO][3499] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.24.131/26]
block=192.168.24.128/26 handle="k8s-pod-network.2911421eaf2fcc7d3f0e737b9529529ff7591556127f7e9de63f721b5e7623ff" host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:27.786687 env[1308]: 2025-10-31 05:44:27.683 [INFO][3499] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.131/26] handle="k8s-pod-network.2911421eaf2fcc7d3f0e737b9529529ff7591556127f7e9de63f721b5e7623ff" host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:27.786687 env[1308]: 2025-10-31 05:44:27.683 [INFO][3499] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 05:44:27.786687 env[1308]: 2025-10-31 05:44:27.683 [INFO][3499] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.24.131/26] IPv6=[] ContainerID="2911421eaf2fcc7d3f0e737b9529529ff7591556127f7e9de63f721b5e7623ff" HandleID="k8s-pod-network.2911421eaf2fcc7d3f0e737b9529529ff7591556127f7e9de63f721b5e7623ff" Workload="srv--f2mor.gb1.brightbox.com-k8s-whisker--6f447487f8--8md8h-eth0" Oct 31 05:44:27.796527 env[1308]: 2025-10-31 05:44:27.687 [INFO][3467] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2911421eaf2fcc7d3f0e737b9529529ff7591556127f7e9de63f721b5e7623ff" Namespace="calico-system" Pod="whisker-6f447487f8-8md8h" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-whisker--6f447487f8--8md8h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f2mor.gb1.brightbox.com-k8s-whisker--6f447487f8--8md8h-eth0", GenerateName:"whisker-6f447487f8-", Namespace:"calico-system", SelfLink:"", UID:"da74ef2c-d536-4fbc-9b28-ba72dfbbfc21", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 44, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6f447487f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f2mor.gb1.brightbox.com", ContainerID:"", Pod:"whisker-6f447487f8-8md8h", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.24.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib9af854fba9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:44:27.796527 env[1308]: 2025-10-31 05:44:27.688 [INFO][3467] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.131/32] ContainerID="2911421eaf2fcc7d3f0e737b9529529ff7591556127f7e9de63f721b5e7623ff" Namespace="calico-system" Pod="whisker-6f447487f8-8md8h" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-whisker--6f447487f8--8md8h-eth0" Oct 31 05:44:27.796527 env[1308]: 2025-10-31 05:44:27.688 [INFO][3467] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9af854fba9 ContainerID="2911421eaf2fcc7d3f0e737b9529529ff7591556127f7e9de63f721b5e7623ff" Namespace="calico-system" Pod="whisker-6f447487f8-8md8h" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-whisker--6f447487f8--8md8h-eth0" Oct 31 05:44:27.796527 env[1308]: 2025-10-31 05:44:27.729 [INFO][3467] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2911421eaf2fcc7d3f0e737b9529529ff7591556127f7e9de63f721b5e7623ff" Namespace="calico-system" Pod="whisker-6f447487f8-8md8h" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-whisker--6f447487f8--8md8h-eth0" Oct 31 05:44:27.796527 env[1308]: 2025-10-31 05:44:27.729 [INFO][3467] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2911421eaf2fcc7d3f0e737b9529529ff7591556127f7e9de63f721b5e7623ff" Namespace="calico-system" Pod="whisker-6f447487f8-8md8h" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-whisker--6f447487f8--8md8h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f2mor.gb1.brightbox.com-k8s-whisker--6f447487f8--8md8h-eth0", GenerateName:"whisker-6f447487f8-", Namespace:"calico-system", SelfLink:"", UID:"da74ef2c-d536-4fbc-9b28-ba72dfbbfc21", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 44, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6f447487f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f2mor.gb1.brightbox.com", ContainerID:"2911421eaf2fcc7d3f0e737b9529529ff7591556127f7e9de63f721b5e7623ff", Pod:"whisker-6f447487f8-8md8h", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.24.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib9af854fba9", MAC:"36:3f:ea:1e:35:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:44:27.796527 env[1308]: 2025-10-31 05:44:27.747 [INFO][3467] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2911421eaf2fcc7d3f0e737b9529529ff7591556127f7e9de63f721b5e7623ff" Namespace="calico-system" Pod="whisker-6f447487f8-8md8h" 
WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-whisker--6f447487f8--8md8h-eth0" Oct 31 05:44:27.859571 env[1308]: time="2025-10-31T05:44:27.858774119Z" level=info msg="StartContainer for \"e412ce912a4075620347742f4045c235d74a1c138eee3430834ab5f86f97ca73\" returns successfully" Oct 31 05:44:27.886357 env[1308]: time="2025-10-31T05:44:27.886251362Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 05:44:27.901819 env[1308]: time="2025-10-31T05:44:27.901508939Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 05:44:27.902147 kubelet[2197]: E1031 05:44:27.902071 2197 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 05:44:27.902791 kubelet[2197]: E1031 05:44:27.902162 2197 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 05:44:27.923616 kernel: audit: type=1400 audit(1761889467.916:317): avc: denied { write } for pid=3711 comm="tee" name="fd" dev="proc" ino=30845 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 05:44:27.916000 audit[3711]: AVC avc: denied { write } for pid=3711 comm="tee" name="fd" dev="proc" ino=30845 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 05:44:27.934245 env[1308]: time="2025-10-31T05:44:27.930802821Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 05:44:27.934245 env[1308]: time="2025-10-31T05:44:27.930990648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 05:44:27.934245 env[1308]: time="2025-10-31T05:44:27.933830729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 05:44:27.934616 env[1308]: time="2025-10-31T05:44:27.934236954Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2911421eaf2fcc7d3f0e737b9529529ff7591556127f7e9de63f721b5e7623ff pid=3706 runtime=io.containerd.runc.v2 Oct 31 05:44:27.934691 kubelet[2197]: E1031 05:44:27.933676 2197 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q9rn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6jdvb_calico-system(749c5f31-df45-44a4-9a60-d28a8f071a0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 05:44:27.939459 env[1308]: time="2025-10-31T05:44:27.937792027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 05:44:27.946000 audit[3713]: AVC avc: denied { write } for pid=3713 comm="tee" name="fd" dev="proc" ino=30848 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 05:44:27.955588 kernel: audit: type=1400 audit(1761889467.946:318): avc: denied { write } for pid=3713 comm="tee" name="fd" dev="proc" ino=30848 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 05:44:27.978553 kernel: audit: type=1300 audit(1761889467.916:317): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffef692c7bd a2=241 a3=1b6 items=1 ppid=3654 pid=3711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:27.978739 kernel: audit: type=1307 audit(1761889467.916:317): cwd="/etc/service/enabled/confd/log" Oct 31 05:44:27.916000 audit[3711]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffef692c7bd a2=241 a3=1b6 items=1 ppid=3654 pid=3711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:27.990426 kernel: audit: type=1302 audit(1761889467.916:317): item=0 name="/dev/fd/63" inode=30838 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 05:44:27.916000 audit: CWD cwd="/etc/service/enabled/confd/log" Oct 31 05:44:27.916000 audit: PATH item=0 name="/dev/fd/63" inode=30838 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 
obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 05:44:27.916000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 05:44:28.001658 kernel: audit: type=1327 audit(1761889467.916:317): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 05:44:28.022953 kernel: audit: type=1300 audit(1761889467.946:318): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff415b77bd a2=241 a3=1b6 items=1 ppid=3656 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:27.946000 audit[3713]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff415b77bd a2=241 a3=1b6 items=1 ppid=3656 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:28.035572 kernel: audit: type=1307 audit(1761889467.946:318): cwd="/etc/service/enabled/felix/log" Oct 31 05:44:28.035745 kernel: audit: type=1302 audit(1761889467.946:318): item=0 name="/dev/fd/63" inode=30839 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 05:44:27.946000 audit: CWD cwd="/etc/service/enabled/felix/log" Oct 31 05:44:27.946000 audit: PATH item=0 name="/dev/fd/63" inode=30839 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 05:44:28.048196 systemd[1]: 
run-containerd-runc-k8s.io-2911421eaf2fcc7d3f0e737b9529529ff7591556127f7e9de63f721b5e7623ff-runc.pKv7yi.mount: Deactivated successfully. Oct 31 05:44:28.064780 kernel: audit: type=1327 audit(1761889467.946:318): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 05:44:27.946000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 05:44:27.978000 audit[3718]: AVC avc: denied { write } for pid=3718 comm="tee" name="fd" dev="proc" ino=31902 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 05:44:27.978000 audit[3718]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffedda5f7bf a2=241 a3=1b6 items=1 ppid=3653 pid=3718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:27.978000 audit: CWD cwd="/etc/service/enabled/cni/log" Oct 31 05:44:27.978000 audit: PATH item=0 name="/dev/fd/63" inode=30842 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 05:44:27.978000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 05:44:27.996000 audit[3725]: AVC avc: denied { write } for pid=3725 comm="tee" name="fd" dev="proc" ino=31916 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 05:44:27.996000 audit[3725]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe472497bd a2=241 a3=1b6 items=1 ppid=3659 pid=3725 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:27.996000 audit: CWD cwd="/etc/service/enabled/bird6/log" Oct 31 05:44:27.996000 audit: PATH item=0 name="/dev/fd/63" inode=31890 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 05:44:27.996000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 05:44:27.995000 audit[3716]: AVC avc: denied { write } for pid=3716 comm="tee" name="fd" dev="proc" ino=31918 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 05:44:27.995000 audit[3716]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc0e8ef7ad a2=241 a3=1b6 items=1 ppid=3657 pid=3716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:27.995000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Oct 31 05:44:27.995000 audit: PATH item=0 name="/dev/fd/63" inode=31884 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 05:44:27.995000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 05:44:28.021000 audit[3752]: AVC avc: denied { write } for pid=3752 comm="tee" name="fd" dev="proc" ino=30861 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 05:44:28.021000 audit[3752]: SYSCALL arch=c000003e syscall=257 
success=yes exit=3 a0=ffffff9c a1=7ffc4d01b7ae a2=241 a3=1b6 items=1 ppid=3651 pid=3752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:28.021000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Oct 31 05:44:28.021000 audit: PATH item=0 name="/dev/fd/63" inode=30856 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 05:44:28.021000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 05:44:28.070000 audit[3758]: AVC avc: denied { write } for pid=3758 comm="tee" name="fd" dev="proc" ino=30868 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Oct 31 05:44:28.070000 audit[3758]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc4339f7be a2=241 a3=1b6 items=1 ppid=3658 pid=3758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:28.070000 audit: CWD cwd="/etc/service/enabled/bird/log" Oct 31 05:44:28.070000 audit: PATH item=0 name="/dev/fd/63" inode=31909 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 31 05:44:28.070000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Oct 31 05:44:28.149186 env[1308]: 2025-10-31 05:44:27.672 [INFO][3529] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" 
Oct 31 05:44:28.149186 env[1308]: 2025-10-31 05:44:27.672 [INFO][3529] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" iface="eth0" netns="/var/run/netns/cni-16ee5f3e-8670-ed9b-77d0-cbe19dd1748f" Oct 31 05:44:28.149186 env[1308]: 2025-10-31 05:44:27.672 [INFO][3529] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" iface="eth0" netns="/var/run/netns/cni-16ee5f3e-8670-ed9b-77d0-cbe19dd1748f" Oct 31 05:44:28.149186 env[1308]: 2025-10-31 05:44:27.673 [INFO][3529] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" iface="eth0" netns="/var/run/netns/cni-16ee5f3e-8670-ed9b-77d0-cbe19dd1748f" Oct 31 05:44:28.149186 env[1308]: 2025-10-31 05:44:27.673 [INFO][3529] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" Oct 31 05:44:28.149186 env[1308]: 2025-10-31 05:44:27.673 [INFO][3529] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" Oct 31 05:44:28.149186 env[1308]: 2025-10-31 05:44:28.105 [INFO][3655] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" HandleID="k8s-pod-network.6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" Workload="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sw8r9-eth0" Oct 31 05:44:28.149186 env[1308]: 2025-10-31 05:44:28.117 [INFO][3655] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 05:44:28.149186 env[1308]: 2025-10-31 05:44:28.117 [INFO][3655] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 05:44:28.149186 env[1308]: 2025-10-31 05:44:28.138 [WARNING][3655] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" HandleID="k8s-pod-network.6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" Workload="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sw8r9-eth0" Oct 31 05:44:28.149186 env[1308]: 2025-10-31 05:44:28.138 [INFO][3655] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" HandleID="k8s-pod-network.6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" Workload="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sw8r9-eth0" Oct 31 05:44:28.149186 env[1308]: 2025-10-31 05:44:28.141 [INFO][3655] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 05:44:28.149186 env[1308]: 2025-10-31 05:44:28.146 [INFO][3529] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" Oct 31 05:44:28.153909 systemd[1]: run-netns-cni\x2d16ee5f3e\x2d8670\x2ded9b\x2d77d0\x2dcbe19dd1748f.mount: Deactivated successfully. 
Oct 31 05:44:28.160816 env[1308]: time="2025-10-31T05:44:28.160717321Z" level=info msg="TearDown network for sandbox \"6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034\" successfully" Oct 31 05:44:28.161047 env[1308]: time="2025-10-31T05:44:28.161012047Z" level=info msg="StopPodSandbox for \"6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034\" returns successfully" Oct 31 05:44:28.164062 env[1308]: time="2025-10-31T05:44:28.164023828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sw8r9,Uid:755f8c0d-5dbb-4026-80b5-87b3cb17189f,Namespace:kube-system,Attempt:1,}" Oct 31 05:44:28.226675 env[1308]: time="2025-10-31T05:44:28.226501694Z" level=info msg="StopPodSandbox for \"dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc\"" Oct 31 05:44:28.230740 systemd-networkd[1069]: cali09ab8106a20: Gained IPv6LL Oct 31 05:44:28.273889 env[1308]: time="2025-10-31T05:44:28.273775992Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 05:44:28.280551 env[1308]: time="2025-10-31T05:44:28.280460567Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 05:44:28.281936 kubelet[2197]: E1031 05:44:28.281087 2197 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 05:44:28.281936 kubelet[2197]: E1031 05:44:28.281213 
2197 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 05:44:28.281936 kubelet[2197]: E1031 05:44:28.281480 2197 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q9rn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,All
owPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6jdvb_calico-system(749c5f31-df45-44a4-9a60-d28a8f071a0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 05:44:28.288561 kubelet[2197]: E1031 05:44:28.284367 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6jdvb" podUID="749c5f31-df45-44a4-9a60-d28a8f071a0b" Oct 31 05:44:28.553797 systemd-networkd[1069]: cali36170ee9025: Gained IPv6LL Oct 31 05:44:28.600088 kubelet[2197]: E1031 05:44:28.596678 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6jdvb" podUID="749c5f31-df45-44a4-9a60-d28a8f071a0b" Oct 31 05:44:28.679675 env[1308]: time="2025-10-31T05:44:28.676442592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f447487f8-8md8h,Uid:da74ef2c-d536-4fbc-9b28-ba72dfbbfc21,Namespace:calico-system,Attempt:0,} returns sandbox id \"2911421eaf2fcc7d3f0e737b9529529ff7591556127f7e9de63f721b5e7623ff\"" Oct 31 05:44:28.695523 env[1308]: time="2025-10-31T05:44:28.695442500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 05:44:28.835000 audit[3834]: NETFILTER_CFG table=filter:105 family=2 entries=17 op=nft_register_rule pid=3834 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:44:28.860717 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 31 05:44:28.860864 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali80519ea3f0c: link becomes ready Oct 31 05:44:28.835000 audit[3834]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff31fed070 a2=0 a3=7fff31fed05c items=0 ppid=2302 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 
05:44:28.835000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:44:28.839601 systemd-networkd[1069]: cali80519ea3f0c: Link UP Oct 31 05:44:28.858819 systemd-networkd[1069]: cali80519ea3f0c: Gained carrier Oct 31 05:44:28.872000 audit[3834]: NETFILTER_CFG table=nat:106 family=2 entries=35 op=nft_register_chain pid=3834 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:44:28.872000 audit[3834]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fff31fed070 a2=0 a3=7fff31fed05c items=0 ppid=2302 pid=3834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:28.872000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:44:28.920084 env[1308]: 2025-10-31 05:44:28.491 [INFO][3801] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" Oct 31 05:44:28.920084 env[1308]: 2025-10-31 05:44:28.491 [INFO][3801] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" iface="eth0" netns="/var/run/netns/cni-563c5178-3880-729f-245d-c68c60317d3c" Oct 31 05:44:28.920084 env[1308]: 2025-10-31 05:44:28.491 [INFO][3801] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" iface="eth0" netns="/var/run/netns/cni-563c5178-3880-729f-245d-c68c60317d3c" Oct 31 05:44:28.920084 env[1308]: 2025-10-31 05:44:28.492 [INFO][3801] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" iface="eth0" netns="/var/run/netns/cni-563c5178-3880-729f-245d-c68c60317d3c" Oct 31 05:44:28.920084 env[1308]: 2025-10-31 05:44:28.492 [INFO][3801] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" Oct 31 05:44:28.920084 env[1308]: 2025-10-31 05:44:28.492 [INFO][3801] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" Oct 31 05:44:28.920084 env[1308]: 2025-10-31 05:44:28.743 [INFO][3816] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" HandleID="k8s-pod-network.dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--dq5pc-eth0" Oct 31 05:44:28.920084 env[1308]: 2025-10-31 05:44:28.744 [INFO][3816] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 05:44:28.920084 env[1308]: 2025-10-31 05:44:28.816 [INFO][3816] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 05:44:28.920084 env[1308]: 2025-10-31 05:44:28.889 [WARNING][3816] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" HandleID="k8s-pod-network.dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--dq5pc-eth0" Oct 31 05:44:28.920084 env[1308]: 2025-10-31 05:44:28.889 [INFO][3816] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" HandleID="k8s-pod-network.dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--dq5pc-eth0" Oct 31 05:44:28.920084 env[1308]: 2025-10-31 05:44:28.909 [INFO][3816] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 05:44:28.920084 env[1308]: 2025-10-31 05:44:28.912 [INFO][3801] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" Oct 31 05:44:28.920084 env[1308]: time="2025-10-31T05:44:28.918115599Z" level=info msg="TearDown network for sandbox \"dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc\" successfully" Oct 31 05:44:28.920084 env[1308]: time="2025-10-31T05:44:28.918168220Z" level=info msg="StopPodSandbox for \"dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc\" returns successfully" Oct 31 05:44:28.919085 systemd[1]: run-netns-cni\x2d563c5178\x2d3880\x2d729f\x2d245d\x2dc68c60317d3c.mount: Deactivated successfully. 
Oct 31 05:44:28.924361 env[1308]: time="2025-10-31T05:44:28.920963335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ff9f49d5d-dq5pc,Uid:ac96e24b-c0dd-48fd-838b-a540fa2a89c0,Namespace:calico-apiserver,Attempt:1,}" Oct 31 05:44:28.940152 kubelet[2197]: I1031 05:44:28.939954 2197 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4vc6k" podStartSLOduration=49.939903957 podStartE2EDuration="49.939903957s" podCreationTimestamp="2025-10-31 05:43:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 05:44:28.653312386 +0000 UTC m=+53.767229627" watchObservedRunningTime="2025-10-31 05:44:28.939903957 +0000 UTC m=+54.053821191" Oct 31 05:44:28.949731 env[1308]: 2025-10-31 05:44:28.413 [INFO][3781] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 05:44:28.949731 env[1308]: 2025-10-31 05:44:28.462 [INFO][3781] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sw8r9-eth0 coredns-668d6bf9bc- kube-system 755f8c0d-5dbb-4026-80b5-87b3cb17189f 971 0 2025-10-31 05:43:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-f2mor.gb1.brightbox.com coredns-668d6bf9bc-sw8r9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali80519ea3f0c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3a44d0061976e670f0c38166a21125c5b676e80aa251b08612328a20fc6b0552" Namespace="kube-system" Pod="coredns-668d6bf9bc-sw8r9" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sw8r9-" Oct 31 05:44:28.949731 env[1308]: 2025-10-31 05:44:28.462 [INFO][3781] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="3a44d0061976e670f0c38166a21125c5b676e80aa251b08612328a20fc6b0552" Namespace="kube-system" Pod="coredns-668d6bf9bc-sw8r9" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sw8r9-eth0" Oct 31 05:44:28.949731 env[1308]: 2025-10-31 05:44:28.710 [INFO][3811] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a44d0061976e670f0c38166a21125c5b676e80aa251b08612328a20fc6b0552" HandleID="k8s-pod-network.3a44d0061976e670f0c38166a21125c5b676e80aa251b08612328a20fc6b0552" Workload="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sw8r9-eth0" Oct 31 05:44:28.949731 env[1308]: 2025-10-31 05:44:28.713 [INFO][3811] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3a44d0061976e670f0c38166a21125c5b676e80aa251b08612328a20fc6b0552" HandleID="k8s-pod-network.3a44d0061976e670f0c38166a21125c5b676e80aa251b08612328a20fc6b0552" Workload="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sw8r9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000224cc0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-f2mor.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-sw8r9", "timestamp":"2025-10-31 05:44:28.71028782 +0000 UTC"}, Hostname:"srv-f2mor.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 05:44:28.949731 env[1308]: 2025-10-31 05:44:28.713 [INFO][3811] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 05:44:28.949731 env[1308]: 2025-10-31 05:44:28.713 [INFO][3811] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 05:44:28.949731 env[1308]: 2025-10-31 05:44:28.714 [INFO][3811] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-f2mor.gb1.brightbox.com' Oct 31 05:44:28.949731 env[1308]: 2025-10-31 05:44:28.738 [INFO][3811] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3a44d0061976e670f0c38166a21125c5b676e80aa251b08612328a20fc6b0552" host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:28.949731 env[1308]: 2025-10-31 05:44:28.750 [INFO][3811] ipam/ipam.go 394: Looking up existing affinities for host host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:28.949731 env[1308]: 2025-10-31 05:44:28.769 [INFO][3811] ipam/ipam.go 511: Trying affinity for 192.168.24.128/26 host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:28.949731 env[1308]: 2025-10-31 05:44:28.778 [INFO][3811] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.128/26 host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:28.949731 env[1308]: 2025-10-31 05:44:28.782 [INFO][3811] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.128/26 host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:28.949731 env[1308]: 2025-10-31 05:44:28.783 [INFO][3811] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.24.128/26 handle="k8s-pod-network.3a44d0061976e670f0c38166a21125c5b676e80aa251b08612328a20fc6b0552" host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:28.949731 env[1308]: 2025-10-31 05:44:28.793 [INFO][3811] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3a44d0061976e670f0c38166a21125c5b676e80aa251b08612328a20fc6b0552 Oct 31 05:44:28.949731 env[1308]: 2025-10-31 05:44:28.805 [INFO][3811] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.24.128/26 handle="k8s-pod-network.3a44d0061976e670f0c38166a21125c5b676e80aa251b08612328a20fc6b0552" host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:28.949731 env[1308]: 2025-10-31 05:44:28.815 [INFO][3811] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.24.132/26] 
block=192.168.24.128/26 handle="k8s-pod-network.3a44d0061976e670f0c38166a21125c5b676e80aa251b08612328a20fc6b0552" host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:28.949731 env[1308]: 2025-10-31 05:44:28.816 [INFO][3811] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.132/26] handle="k8s-pod-network.3a44d0061976e670f0c38166a21125c5b676e80aa251b08612328a20fc6b0552" host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:28.949731 env[1308]: 2025-10-31 05:44:28.816 [INFO][3811] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 05:44:28.949731 env[1308]: 2025-10-31 05:44:28.816 [INFO][3811] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.24.132/26] IPv6=[] ContainerID="3a44d0061976e670f0c38166a21125c5b676e80aa251b08612328a20fc6b0552" HandleID="k8s-pod-network.3a44d0061976e670f0c38166a21125c5b676e80aa251b08612328a20fc6b0552" Workload="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sw8r9-eth0" Oct 31 05:44:28.954381 env[1308]: 2025-10-31 05:44:28.820 [INFO][3781] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3a44d0061976e670f0c38166a21125c5b676e80aa251b08612328a20fc6b0552" Namespace="kube-system" Pod="coredns-668d6bf9bc-sw8r9" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sw8r9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sw8r9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"755f8c0d-5dbb-4026-80b5-87b3cb17189f", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 43, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f2mor.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-sw8r9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali80519ea3f0c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:44:28.954381 env[1308]: 2025-10-31 05:44:28.821 [INFO][3781] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.132/32] ContainerID="3a44d0061976e670f0c38166a21125c5b676e80aa251b08612328a20fc6b0552" Namespace="kube-system" Pod="coredns-668d6bf9bc-sw8r9" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sw8r9-eth0" Oct 31 05:44:28.954381 env[1308]: 2025-10-31 05:44:28.821 [INFO][3781] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali80519ea3f0c ContainerID="3a44d0061976e670f0c38166a21125c5b676e80aa251b08612328a20fc6b0552" Namespace="kube-system" Pod="coredns-668d6bf9bc-sw8r9" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sw8r9-eth0" Oct 31 05:44:28.954381 env[1308]: 2025-10-31 05:44:28.871 [INFO][3781] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="3a44d0061976e670f0c38166a21125c5b676e80aa251b08612328a20fc6b0552" Namespace="kube-system" Pod="coredns-668d6bf9bc-sw8r9" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sw8r9-eth0" Oct 31 05:44:28.954381 env[1308]: 2025-10-31 05:44:28.872 [INFO][3781] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3a44d0061976e670f0c38166a21125c5b676e80aa251b08612328a20fc6b0552" Namespace="kube-system" Pod="coredns-668d6bf9bc-sw8r9" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sw8r9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sw8r9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"755f8c0d-5dbb-4026-80b5-87b3cb17189f", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 43, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f2mor.gb1.brightbox.com", ContainerID:"3a44d0061976e670f0c38166a21125c5b676e80aa251b08612328a20fc6b0552", Pod:"coredns-668d6bf9bc-sw8r9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali80519ea3f0c", MAC:"2a:ca:fa:80:85:dd", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:44:28.954381 env[1308]: 2025-10-31 05:44:28.946 [INFO][3781] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3a44d0061976e670f0c38166a21125c5b676e80aa251b08612328a20fc6b0552" Namespace="kube-system" Pod="coredns-668d6bf9bc-sw8r9" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sw8r9-eth0" Oct 31 05:44:29.028626 env[1308]: time="2025-10-31T05:44:29.015385223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 05:44:29.028626 env[1308]: time="2025-10-31T05:44:29.015507812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 05:44:29.028626 env[1308]: time="2025-10-31T05:44:29.015562695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 05:44:29.028626 env[1308]: time="2025-10-31T05:44:29.015974839Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a44d0061976e670f0c38166a21125c5b676e80aa251b08612328a20fc6b0552 pid=3872 runtime=io.containerd.runc.v2 Oct 31 05:44:29.041103 env[1308]: time="2025-10-31T05:44:29.041027511Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 05:44:29.046436 env[1308]: time="2025-10-31T05:44:29.044567738Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 05:44:29.046613 kubelet[2197]: E1031 05:44:29.045749 2197 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 05:44:29.046613 kubelet[2197]: E1031 05:44:29.045851 2197 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 05:44:29.046613 kubelet[2197]: E1031 05:44:29.046016 2197 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8668490f11434cfabca44ddf284789cf,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tjkfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f447487f8-8md8h_calico-system(da74ef2c-d536-4fbc-9b28-ba72dfbbfc21): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 05:44:29.054908 env[1308]: time="2025-10-31T05:44:29.053698990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 05:44:29.093366 
systemd-networkd[1069]: calib9af854fba9: Gained IPv6LL Oct 31 05:44:29.222000 audit[3924]: AVC avc: denied { bpf } for pid=3924 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.222000 audit[3924]: AVC avc: denied { bpf } for pid=3924 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.222000 audit[3924]: AVC avc: denied { perfmon } for pid=3924 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.222000 audit[3924]: AVC avc: denied { perfmon } for pid=3924 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.222000 audit[3924]: AVC avc: denied { perfmon } for pid=3924 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.222000 audit[3924]: AVC avc: denied { perfmon } for pid=3924 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.222000 audit[3924]: AVC avc: denied { perfmon } for pid=3924 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.222000 audit[3924]: AVC avc: denied { bpf } for pid=3924 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.222000 audit[3924]: AVC avc: denied { bpf } for pid=3924 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 
05:44:29.222000 audit: BPF prog-id=10 op=LOAD Oct 31 05:44:29.222000 audit[3924]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc232e8300 a2=98 a3=1fffffffffffffff items=0 ppid=3660 pid=3924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.222000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Oct 31 05:44:29.225000 audit: BPF prog-id=10 op=UNLOAD Oct 31 05:44:29.226000 audit[3924]: AVC avc: denied { bpf } for pid=3924 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.226000 audit[3924]: AVC avc: denied { bpf } for pid=3924 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.226000 audit[3924]: AVC avc: denied { perfmon } for pid=3924 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.226000 audit[3924]: AVC avc: denied { perfmon } for pid=3924 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.226000 audit[3924]: AVC avc: denied { perfmon } for pid=3924 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.226000 audit[3924]: AVC avc: denied { perfmon } for pid=3924 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.226000 audit[3924]: AVC avc: denied { perfmon } for pid=3924 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.226000 audit[3924]: AVC avc: denied { bpf } for pid=3924 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.226000 audit[3924]: AVC avc: denied { bpf } for pid=3924 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.226000 audit: BPF prog-id=11 op=LOAD Oct 31 05:44:29.226000 audit[3924]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc232e81e0 a2=94 a3=3 items=0 ppid=3660 pid=3924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.226000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Oct 31 05:44:29.237000 audit: BPF prog-id=11 op=UNLOAD Oct 31 05:44:29.237000 audit[3924]: AVC avc: denied { bpf } for pid=3924 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.237000 audit[3924]: AVC avc: denied { bpf } for pid=3924 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.237000 audit[3924]: AVC avc: denied { perfmon } for pid=3924 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.237000 audit[3924]: AVC avc: denied { perfmon } for pid=3924 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.237000 audit[3924]: AVC avc: denied { perfmon } for pid=3924 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.237000 audit[3924]: AVC avc: denied { perfmon } for pid=3924 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.237000 audit[3924]: AVC avc: denied { perfmon } for pid=3924 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.237000 audit[3924]: AVC avc: denied { bpf } for pid=3924 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.237000 audit[3924]: AVC avc: denied { bpf } for pid=3924 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.237000 audit: BPF prog-id=12 op=LOAD Oct 31 05:44:29.237000 audit[3924]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc232e8220 a2=94 a3=7ffc232e8400 items=0 ppid=3660 pid=3924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.237000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Oct 31 05:44:29.237000 audit: BPF prog-id=12 op=UNLOAD Oct 31 05:44:29.237000 audit[3924]: AVC avc: denied { perfmon } for pid=3924 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.237000 audit[3924]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffc232e82f0 a2=50 a3=a000000085 items=0 ppid=3660 pid=3924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.237000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Oct 31 05:44:29.256000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.256000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.256000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.256000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 31 05:44:29.256000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.256000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.256000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.256000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.256000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.256000 audit: BPF prog-id=13 op=LOAD Oct 31 05:44:29.256000 audit[3925]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffea5f26260 a2=98 a3=3 items=0 ppid=3660 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.256000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 05:44:29.260000 audit: BPF prog-id=13 op=UNLOAD Oct 31 05:44:29.273000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.273000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.273000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.273000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.273000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.273000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.273000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.273000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.273000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.273000 audit: BPF prog-id=14 op=LOAD Oct 31 05:44:29.273000 audit[3925]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffea5f26050 a2=94 a3=54428f items=0 ppid=3660 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.273000 audit: 
PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 05:44:29.276000 audit: BPF prog-id=14 op=UNLOAD Oct 31 05:44:29.276000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.276000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.276000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.276000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.276000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.276000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.276000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.276000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.276000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.276000 audit: BPF prog-id=15 op=LOAD Oct 31 05:44:29.276000 audit[3925]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffea5f26080 a2=94 a3=2 items=0 ppid=3660 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.276000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 05:44:29.277000 audit: BPF prog-id=15 op=UNLOAD Oct 31 05:44:29.307648 env[1308]: time="2025-10-31T05:44:29.307524972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sw8r9,Uid:755f8c0d-5dbb-4026-80b5-87b3cb17189f,Namespace:kube-system,Attempt:1,} returns sandbox id \"3a44d0061976e670f0c38166a21125c5b676e80aa251b08612328a20fc6b0552\"" Oct 31 05:44:29.315146 env[1308]: time="2025-10-31T05:44:29.315097790Z" level=info msg="CreateContainer within sandbox \"3a44d0061976e670f0c38166a21125c5b676e80aa251b08612328a20fc6b0552\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 05:44:29.332464 env[1308]: time="2025-10-31T05:44:29.332392642Z" level=info msg="CreateContainer within sandbox \"3a44d0061976e670f0c38166a21125c5b676e80aa251b08612328a20fc6b0552\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6a7ffd7ce07451c7c6ebcddd375b1424a5ac7829857ff4fab14068a1c0901b2a\"" Oct 31 05:44:29.334008 env[1308]: time="2025-10-31T05:44:29.333971589Z" level=info msg="StartContainer for \"6a7ffd7ce07451c7c6ebcddd375b1424a5ac7829857ff4fab14068a1c0901b2a\"" Oct 31 05:44:29.407590 env[1308]: time="2025-10-31T05:44:29.407501850Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 05:44:29.417811 env[1308]: time="2025-10-31T05:44:29.417630256Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 05:44:29.419256 kubelet[2197]: E1031 05:44:29.418394 2197 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 05:44:29.419256 kubelet[2197]: E1031 05:44:29.418491 2197 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 05:44:29.419256 kubelet[2197]: E1031 05:44:29.418833 2197 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tjkfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f447487f8-8md8h_calico-system(da74ef2c-d536-4fbc-9b28-ba72dfbbfc21): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 05:44:29.420527 kubelet[2197]: E1031 05:44:29.420411 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f447487f8-8md8h" podUID="da74ef2c-d536-4fbc-9b28-ba72dfbbfc21" Oct 31 05:44:29.503245 systemd-networkd[1069]: calidd85143cbb7: Link UP Oct 31 05:44:29.506814 systemd-networkd[1069]: calidd85143cbb7: Gained carrier Oct 31 05:44:29.507581 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calidd85143cbb7: link becomes ready Oct 31 05:44:29.538702 env[1308]: 2025-10-31 05:44:29.184 [INFO][3855] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--dq5pc-eth0 calico-apiserver-7ff9f49d5d- calico-apiserver ac96e24b-c0dd-48fd-838b-a540fa2a89c0 985 0 2025-10-31 05:43:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7ff9f49d5d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-f2mor.gb1.brightbox.com 
calico-apiserver-7ff9f49d5d-dq5pc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidd85143cbb7 [] [] }} ContainerID="6628bcabf0d466fc7613650e368ec5c86d6c84911e3d5abda5227480a5c235ce" Namespace="calico-apiserver" Pod="calico-apiserver-7ff9f49d5d-dq5pc" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--dq5pc-" Oct 31 05:44:29.538702 env[1308]: 2025-10-31 05:44:29.185 [INFO][3855] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6628bcabf0d466fc7613650e368ec5c86d6c84911e3d5abda5227480a5c235ce" Namespace="calico-apiserver" Pod="calico-apiserver-7ff9f49d5d-dq5pc" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--dq5pc-eth0" Oct 31 05:44:29.538702 env[1308]: 2025-10-31 05:44:29.359 [INFO][3920] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6628bcabf0d466fc7613650e368ec5c86d6c84911e3d5abda5227480a5c235ce" HandleID="k8s-pod-network.6628bcabf0d466fc7613650e368ec5c86d6c84911e3d5abda5227480a5c235ce" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--dq5pc-eth0" Oct 31 05:44:29.538702 env[1308]: 2025-10-31 05:44:29.359 [INFO][3920] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6628bcabf0d466fc7613650e368ec5c86d6c84911e3d5abda5227480a5c235ce" HandleID="k8s-pod-network.6628bcabf0d466fc7613650e368ec5c86d6c84911e3d5abda5227480a5c235ce" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--dq5pc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f430), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-f2mor.gb1.brightbox.com", "pod":"calico-apiserver-7ff9f49d5d-dq5pc", "timestamp":"2025-10-31 05:44:29.359258667 +0000 UTC"}, Hostname:"srv-f2mor.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 05:44:29.538702 env[1308]: 2025-10-31 05:44:29.359 [INFO][3920] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 05:44:29.538702 env[1308]: 2025-10-31 05:44:29.360 [INFO][3920] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 05:44:29.538702 env[1308]: 2025-10-31 05:44:29.360 [INFO][3920] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-f2mor.gb1.brightbox.com' Oct 31 05:44:29.538702 env[1308]: 2025-10-31 05:44:29.371 [INFO][3920] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6628bcabf0d466fc7613650e368ec5c86d6c84911e3d5abda5227480a5c235ce" host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:29.538702 env[1308]: 2025-10-31 05:44:29.380 [INFO][3920] ipam/ipam.go 394: Looking up existing affinities for host host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:29.538702 env[1308]: 2025-10-31 05:44:29.387 [INFO][3920] ipam/ipam.go 511: Trying affinity for 192.168.24.128/26 host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:29.538702 env[1308]: 2025-10-31 05:44:29.392 [INFO][3920] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.128/26 host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:29.538702 env[1308]: 2025-10-31 05:44:29.397 [INFO][3920] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.128/26 host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:29.538702 env[1308]: 2025-10-31 05:44:29.398 [INFO][3920] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.24.128/26 handle="k8s-pod-network.6628bcabf0d466fc7613650e368ec5c86d6c84911e3d5abda5227480a5c235ce" host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:29.538702 env[1308]: 2025-10-31 05:44:29.401 [INFO][3920] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6628bcabf0d466fc7613650e368ec5c86d6c84911e3d5abda5227480a5c235ce Oct 31 05:44:29.538702 env[1308]: 2025-10-31 05:44:29.413 [INFO][3920] 
ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.24.128/26 handle="k8s-pod-network.6628bcabf0d466fc7613650e368ec5c86d6c84911e3d5abda5227480a5c235ce" host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:29.538702 env[1308]: 2025-10-31 05:44:29.440 [INFO][3920] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.24.133/26] block=192.168.24.128/26 handle="k8s-pod-network.6628bcabf0d466fc7613650e368ec5c86d6c84911e3d5abda5227480a5c235ce" host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:29.538702 env[1308]: 2025-10-31 05:44:29.440 [INFO][3920] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.133/26] handle="k8s-pod-network.6628bcabf0d466fc7613650e368ec5c86d6c84911e3d5abda5227480a5c235ce" host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:29.538702 env[1308]: 2025-10-31 05:44:29.441 [INFO][3920] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 05:44:29.538702 env[1308]: 2025-10-31 05:44:29.442 [INFO][3920] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.24.133/26] IPv6=[] ContainerID="6628bcabf0d466fc7613650e368ec5c86d6c84911e3d5abda5227480a5c235ce" HandleID="k8s-pod-network.6628bcabf0d466fc7613650e368ec5c86d6c84911e3d5abda5227480a5c235ce" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--dq5pc-eth0" Oct 31 05:44:29.540043 env[1308]: 2025-10-31 05:44:29.459 [INFO][3855] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6628bcabf0d466fc7613650e368ec5c86d6c84911e3d5abda5227480a5c235ce" Namespace="calico-apiserver" Pod="calico-apiserver-7ff9f49d5d-dq5pc" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--dq5pc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--dq5pc-eth0", GenerateName:"calico-apiserver-7ff9f49d5d-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"ac96e24b-c0dd-48fd-838b-a540fa2a89c0", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 43, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ff9f49d5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f2mor.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-7ff9f49d5d-dq5pc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd85143cbb7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:44:29.540043 env[1308]: 2025-10-31 05:44:29.459 [INFO][3855] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.133/32] ContainerID="6628bcabf0d466fc7613650e368ec5c86d6c84911e3d5abda5227480a5c235ce" Namespace="calico-apiserver" Pod="calico-apiserver-7ff9f49d5d-dq5pc" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--dq5pc-eth0" Oct 31 05:44:29.540043 env[1308]: 2025-10-31 05:44:29.459 [INFO][3855] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd85143cbb7 ContainerID="6628bcabf0d466fc7613650e368ec5c86d6c84911e3d5abda5227480a5c235ce" Namespace="calico-apiserver" Pod="calico-apiserver-7ff9f49d5d-dq5pc" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--dq5pc-eth0" 
Oct 31 05:44:29.540043 env[1308]: 2025-10-31 05:44:29.510 [INFO][3855] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6628bcabf0d466fc7613650e368ec5c86d6c84911e3d5abda5227480a5c235ce" Namespace="calico-apiserver" Pod="calico-apiserver-7ff9f49d5d-dq5pc" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--dq5pc-eth0" Oct 31 05:44:29.540043 env[1308]: 2025-10-31 05:44:29.511 [INFO][3855] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6628bcabf0d466fc7613650e368ec5c86d6c84911e3d5abda5227480a5c235ce" Namespace="calico-apiserver" Pod="calico-apiserver-7ff9f49d5d-dq5pc" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--dq5pc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--dq5pc-eth0", GenerateName:"calico-apiserver-7ff9f49d5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"ac96e24b-c0dd-48fd-838b-a540fa2a89c0", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 43, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ff9f49d5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f2mor.gb1.brightbox.com", ContainerID:"6628bcabf0d466fc7613650e368ec5c86d6c84911e3d5abda5227480a5c235ce", Pod:"calico-apiserver-7ff9f49d5d-dq5pc", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd85143cbb7", MAC:"b2:d1:95:b6:15:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:44:29.540043 env[1308]: 2025-10-31 05:44:29.530 [INFO][3855] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6628bcabf0d466fc7613650e368ec5c86d6c84911e3d5abda5227480a5c235ce" Namespace="calico-apiserver" Pod="calico-apiserver-7ff9f49d5d-dq5pc" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--dq5pc-eth0" Oct 31 05:44:29.549719 env[1308]: time="2025-10-31T05:44:29.549656056Z" level=info msg="StartContainer for \"6a7ffd7ce07451c7c6ebcddd375b1424a5ac7829857ff4fab14068a1c0901b2a\" returns successfully" Oct 31 05:44:29.575038 env[1308]: time="2025-10-31T05:44:29.574918478Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 05:44:29.575428 env[1308]: time="2025-10-31T05:44:29.575357149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 05:44:29.575642 env[1308]: time="2025-10-31T05:44:29.575588037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 05:44:29.576110 env[1308]: time="2025-10-31T05:44:29.576059067Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6628bcabf0d466fc7613650e368ec5c86d6c84911e3d5abda5227480a5c235ce pid=3983 runtime=io.containerd.runc.v2 Oct 31 05:44:29.631117 kubelet[2197]: E1031 05:44:29.630984 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f447487f8-8md8h" podUID="da74ef2c-d536-4fbc-9b28-ba72dfbbfc21" Oct 31 05:44:29.631807 kubelet[2197]: E1031 05:44:29.631745 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6jdvb" podUID="749c5f31-df45-44a4-9a60-d28a8f071a0b" Oct 31 05:44:29.639000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.639000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.639000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.639000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.639000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.639000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.639000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 
05:44:29.639000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.639000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.639000 audit: BPF prog-id=16 op=LOAD Oct 31 05:44:29.639000 audit[3925]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffea5f25f40 a2=94 a3=1 items=0 ppid=3660 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.639000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 05:44:29.640000 audit: BPF prog-id=16 op=UNLOAD Oct 31 05:44:29.640000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.640000 audit[3925]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffea5f26010 a2=50 a3=7ffea5f260f0 items=0 ppid=3660 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.640000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 05:44:29.663000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.663000 audit[3925]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffea5f25f50 a2=28 a3=0 items=0 ppid=3660 pid=3925 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.663000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 05:44:29.666000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.666000 audit[3925]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffea5f25f80 a2=28 a3=0 items=0 ppid=3660 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.666000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 05:44:29.667000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.667000 audit[3925]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffea5f25e90 a2=28 a3=0 items=0 ppid=3660 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.667000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 05:44:29.667000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.667000 audit[3925]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffea5f25fa0 a2=28 a3=0 items=0 ppid=3660 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.667000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 05:44:29.669000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.669000 audit[3925]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffea5f25f80 a2=28 a3=0 items=0 ppid=3660 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.669000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 05:44:29.670000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.670000 audit[3925]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffea5f25f70 a2=28 a3=0 items=0 ppid=3660 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.670000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 05:44:29.671000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.671000 audit[3925]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffea5f25fa0 a2=28 a3=0 items=0 ppid=3660 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Oct 31 05:44:29.671000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 05:44:29.673000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.673000 audit[3925]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffea5f25f80 a2=28 a3=0 items=0 ppid=3660 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.673000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 05:44:29.673000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.673000 audit[3925]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffea5f25fa0 a2=28 a3=0 items=0 ppid=3660 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.673000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 05:44:29.673000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.673000 audit[3925]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffea5f25f70 a2=28 a3=0 items=0 ppid=3660 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.673000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 05:44:29.674000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.674000 audit[3925]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffea5f25fe0 a2=28 a3=0 items=0 ppid=3660 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.674000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 05:44:29.675000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.675000 audit[3925]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffea5f25d90 a2=50 a3=1 items=0 ppid=3660 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.675000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 05:44:29.676000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.676000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.676000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 
05:44:29.676000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.676000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.676000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.676000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.676000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.681182 kubelet[2197]: I1031 05:44:29.681009 2197 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-sw8r9" podStartSLOduration=50.680990783 podStartE2EDuration="50.680990783s" podCreationTimestamp="2025-10-31 05:43:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 05:44:29.653717312 +0000 UTC m=+54.767634546" watchObservedRunningTime="2025-10-31 05:44:29.680990783 +0000 UTC m=+54.794908015" Oct 31 05:44:29.676000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.676000 audit: BPF prog-id=17 op=LOAD Oct 31 05:44:29.676000 audit[3925]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 
a1=7ffea5f25d90 a2=94 a3=5 items=0 ppid=3660 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 05:44:29.693000 audit: BPF prog-id=17 op=UNLOAD Oct 31 05:44:29.696000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.696000 audit[3925]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffea5f25e40 a2=50 a3=1 items=0 ppid=3660 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.696000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 05:44:29.696000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.696000 audit[3925]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffea5f25f60 a2=4 a3=38 items=0 ppid=3660 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.696000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 05:44:29.698000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.698000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.698000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.698000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.698000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.698000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.698000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.698000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.698000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.698000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.698000 audit[3925]: AVC avc: denied { confidentiality } for pid=3925 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 31 05:44:29.698000 audit[3925]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffea5f25fb0 a2=94 a3=6 items=0 ppid=3660 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.698000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 05:44:29.700000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.700000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.700000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.700000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.700000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.700000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.700000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.700000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.700000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.700000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.700000 audit[3925]: AVC avc: denied { confidentiality } for pid=3925 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 31 05:44:29.700000 audit[3925]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffea5f25760 a2=94 a3=88 items=0 ppid=3660 pid=3925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.700000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 05:44:29.702000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.702000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.702000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.702000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.702000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.702000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.702000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.702000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.702000 audit[3925]: AVC avc: denied { perfmon } for pid=3925 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.702000 audit[3925]: AVC avc: denied { bpf } for pid=3925 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.702000 audit[3925]: AVC avc: denied { confidentiality } for pid=3925 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 31 05:44:29.702000 audit[3925]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffea5f25760 a2=94 a3=88 items=0 ppid=3660 pid=3925 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.702000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Oct 31 05:44:29.729000 audit[4009]: NETFILTER_CFG table=filter:107 family=2 entries=14 op=nft_register_rule pid=4009 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:44:29.729000 audit[4009]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd28b3f130 a2=0 a3=7ffd28b3f11c items=0 ppid=2302 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.729000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:44:29.736000 audit[4009]: NETFILTER_CFG table=nat:108 family=2 entries=44 op=nft_register_rule pid=4009 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:44:29.736000 audit[4009]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd28b3f130 a2=0 a3=7ffd28b3f11c items=0 ppid=2302 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.736000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:44:29.769000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.769000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.769000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.769000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.769000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.769000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.769000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.769000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.769000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.769000 audit: BPF prog-id=18 op=LOAD Oct 31 05:44:29.769000 audit[4017]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcfbd83280 a2=98 a3=1999999999999999 items=0 ppid=3660 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.769000 
audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Oct 31 05:44:29.777000 audit: BPF prog-id=18 op=UNLOAD Oct 31 05:44:29.777000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.777000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.777000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.777000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.777000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.777000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.777000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.777000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.777000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.777000 audit: BPF prog-id=19 op=LOAD Oct 31 05:44:29.777000 audit[4017]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcfbd83160 a2=94 a3=ffff items=0 ppid=3660 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.777000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Oct 31 05:44:29.779000 audit: BPF prog-id=19 op=UNLOAD Oct 31 05:44:29.779000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.779000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.779000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.779000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.779000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.779000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.779000 audit[4017]: AVC avc: denied { perfmon } for pid=4017 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.779000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.779000 audit[4017]: AVC avc: denied { bpf } for pid=4017 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.779000 audit: BPF prog-id=20 op=LOAD Oct 31 05:44:29.779000 audit[4017]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcfbd831a0 a2=94 a3=7ffcfbd83380 items=0 ppid=3660 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.779000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Oct 31 05:44:29.780000 audit: BPF prog-id=20 op=UNLOAD Oct 31 05:44:29.785000 audit[4018]: NETFILTER_CFG table=filter:109 family=2 entries=14 op=nft_register_rule pid=4018 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:44:29.785000 audit[4018]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 
a1=7ffe6ea2d170 a2=0 a3=7ffe6ea2d15c items=0 ppid=2302 pid=4018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.785000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:44:29.789000 audit[4018]: NETFILTER_CFG table=nat:110 family=2 entries=20 op=nft_register_rule pid=4018 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:44:29.789000 audit[4018]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe6ea2d170 a2=0 a3=7ffe6ea2d15c items=0 ppid=2302 pid=4018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.789000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:44:29.845553 env[1308]: time="2025-10-31T05:44:29.845471180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ff9f49d5d-dq5pc,Uid:ac96e24b-c0dd-48fd-838b-a540fa2a89c0,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6628bcabf0d466fc7613650e368ec5c86d6c84911e3d5abda5227480a5c235ce\"" Oct 31 05:44:29.849678 env[1308]: time="2025-10-31T05:44:29.849639516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 05:44:29.946208 systemd-networkd[1069]: vxlan.calico: Link UP Oct 31 05:44:29.946220 systemd-networkd[1069]: vxlan.calico: Gained carrier Oct 31 05:44:29.994000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.994000 audit[4050]: AVC avc: denied { 
bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.994000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.994000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.994000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.994000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.994000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.994000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.994000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.994000 audit: BPF prog-id=21 op=LOAD Oct 31 05:44:29.994000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdf4104020 a2=98 a3=20 items=0 ppid=3660 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.994000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 05:44:29.995000 audit: BPF prog-id=21 op=UNLOAD Oct 31 05:44:29.997000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.997000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.997000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.997000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.997000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.997000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.997000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.997000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.997000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.997000 audit: BPF prog-id=22 op=LOAD Oct 31 05:44:29.997000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdf4103e30 a2=94 a3=54428f items=0 ppid=3660 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.997000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 05:44:29.997000 audit: BPF prog-id=22 op=UNLOAD Oct 31 05:44:29.997000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.997000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.997000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.997000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.997000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.997000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.997000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.997000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.997000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.997000 audit: BPF prog-id=23 op=LOAD Oct 31 05:44:29.997000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdf4103e60 a2=94 a3=2 items=0 ppid=3660 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.997000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 05:44:29.998000 audit: BPF prog-id=23 op=UNLOAD Oct 31 05:44:29.998000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.998000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdf4103d30 a2=28 a3=0 
items=0 ppid=3660 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.998000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 05:44:29.998000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.998000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdf4103d60 a2=28 a3=0 items=0 ppid=3660 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.998000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 05:44:29.998000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.998000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdf4103c70 a2=28 a3=0 items=0 ppid=3660 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.998000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 05:44:29.998000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.998000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdf4103d80 a2=28 a3=0 items=0 ppid=3660 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.998000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 05:44:29.998000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.998000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdf4103d60 a2=28 a3=0 items=0 ppid=3660 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.998000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 05:44:29.998000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.998000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdf4103d50 a2=28 a3=0 items=0 ppid=3660 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.998000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 05:44:29.998000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.998000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdf4103d80 a2=28 a3=0 items=0 ppid=3660 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.998000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 05:44:29.998000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.998000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdf4103d60 a2=28 a3=0 items=0 ppid=3660 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Oct 31 05:44:29.998000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 05:44:29.998000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.998000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdf4103d80 a2=28 a3=0 items=0 ppid=3660 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.998000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 05:44:29.998000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.998000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffdf4103d50 a2=28 a3=0 items=0 ppid=3660 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.998000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 05:44:29.998000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.998000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffdf4103dc0 a2=28 a3=0 items=0 ppid=3660 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.998000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 05:44:29.999000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.999000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.999000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.999000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.999000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.999000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 
05:44:29.999000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.999000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.999000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:29.999000 audit: BPF prog-id=24 op=LOAD Oct 31 05:44:29.999000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdf4103c30 a2=94 a3=0 items=0 ppid=3660 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:29.999000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 05:44:29.999000 audit: BPF prog-id=24 op=UNLOAD Oct 31 05:44:30.001000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.001000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffdf4103c20 a2=50 a3=2800 items=0 ppid=3660 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.001000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 05:44:30.002000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.002000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffdf4103c20 a2=50 a3=2800 items=0 ppid=3660 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.002000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 05:44:30.003000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.003000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.003000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.003000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.003000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.003000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.003000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.003000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.003000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.003000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.003000 audit: BPF prog-id=25 op=LOAD Oct 31 05:44:30.003000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdf4103440 a2=94 a3=2 items=0 ppid=3660 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.003000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 05:44:30.003000 audit: BPF prog-id=25 op=UNLOAD Oct 31 05:44:30.003000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.003000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.003000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.003000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.003000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.003000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.003000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.003000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.003000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.003000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.003000 audit: BPF prog-id=26 op=LOAD Oct 31 05:44:30.003000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdf4103540 a2=94 a3=30 items=0 ppid=3660 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.003000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Oct 31 05:44:30.007000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.007000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.007000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.007000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.007000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.007000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 31 05:44:30.007000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.007000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.007000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.007000 audit: BPF prog-id=27 op=LOAD Oct 31 05:44:30.007000 audit[4052]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff65ef3ff0 a2=98 a3=0 items=0 ppid=3660 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.007000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 05:44:30.007000 audit: BPF prog-id=27 op=UNLOAD Oct 31 05:44:30.007000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.007000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.007000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.007000 audit[4052]: AVC 
avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.007000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.007000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.007000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.007000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.007000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.007000 audit: BPF prog-id=28 op=LOAD Oct 31 05:44:30.007000 audit[4052]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff65ef3de0 a2=94 a3=54428f items=0 ppid=3660 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.007000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 05:44:30.008000 audit: BPF prog-id=28 op=UNLOAD Oct 31 05:44:30.008000 audit[4052]: AVC avc: denied { bpf } for pid=4052 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.008000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.008000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.008000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.008000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.008000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.008000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.008000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.008000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.008000 audit: BPF prog-id=29 op=LOAD Oct 31 05:44:30.008000 audit[4052]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 
a0=5 a1=7fff65ef3e10 a2=94 a3=2 items=0 ppid=3660 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.008000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 05:44:30.008000 audit: BPF prog-id=29 op=UNLOAD Oct 31 05:44:30.168144 env[1308]: time="2025-10-31T05:44:30.167694769Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 05:44:30.173479 env[1308]: time="2025-10-31T05:44:30.173240200Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 05:44:30.174071 kubelet[2197]: E1031 05:44:30.174020 2197 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 05:44:30.174690 kubelet[2197]: E1031 05:44:30.174645 2197 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 05:44:30.175163 kubelet[2197]: E1031 05:44:30.175073 2197 kuberuntime_manager.go:1341] "Unhandled 
Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v6crh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7ff9f49d5d-dq5pc_calico-apiserver(ac96e24b-c0dd-48fd-838b-a540fa2a89c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 05:44:30.176970 kubelet[2197]: E1031 05:44:30.176880 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ff9f49d5d-dq5pc" podUID="ac96e24b-c0dd-48fd-838b-a540fa2a89c0" Oct 31 05:44:30.197000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Oct 31 05:44:30.197000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.197000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.197000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.197000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.197000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.197000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.197000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.197000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.197000 audit: BPF prog-id=30 op=LOAD Oct 31 05:44:30.197000 audit[4052]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff65ef3cd0 a2=94 a3=1 items=0 ppid=3660 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.197000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 05:44:30.197000 audit: BPF prog-id=30 op=UNLOAD Oct 31 05:44:30.197000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.197000 audit[4052]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fff65ef3da0 a2=50 a3=7fff65ef3e80 items=0 ppid=3660 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.197000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 05:44:30.210000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.210000 audit[4052]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff65ef3ce0 a2=28 a3=0 items=0 ppid=3660 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.210000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 05:44:30.210000 audit[4052]: AVC avc: denied { bpf 
} for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.210000 audit[4052]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff65ef3d10 a2=28 a3=0 items=0 ppid=3660 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.210000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 05:44:30.210000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.210000 audit[4052]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff65ef3c20 a2=28 a3=0 items=0 ppid=3660 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.210000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 05:44:30.210000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.210000 audit[4052]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff65ef3d30 a2=28 a3=0 items=0 ppid=3660 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.210000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 05:44:30.210000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.210000 audit[4052]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff65ef3d10 a2=28 a3=0 items=0 ppid=3660 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.210000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 05:44:30.210000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.210000 audit[4052]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff65ef3d00 a2=28 a3=0 items=0 ppid=3660 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.210000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 05:44:30.210000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 31 05:44:30.210000 audit[4052]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff65ef3d30 a2=28 a3=0 items=0 ppid=3660 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.210000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 05:44:30.210000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.210000 audit[4052]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff65ef3d10 a2=28 a3=0 items=0 ppid=3660 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.210000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 05:44:30.210000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.210000 audit[4052]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff65ef3d30 a2=28 a3=0 items=0 ppid=3660 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.210000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 05:44:30.210000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.210000 audit[4052]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff65ef3d00 a2=28 a3=0 items=0 ppid=3660 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.210000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 05:44:30.210000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.210000 audit[4052]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff65ef3d70 a2=28 a3=0 items=0 ppid=3660 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.210000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 05:44:30.210000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.210000 audit[4052]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff65ef3b20 a2=50 a3=1 items=0 ppid=3660 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.210000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 05:44:30.210000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.210000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.210000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.210000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.210000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.210000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.210000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 31 05:44:30.210000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.210000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.210000 audit: BPF prog-id=31 op=LOAD Oct 31 05:44:30.210000 audit[4052]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff65ef3b20 a2=94 a3=5 items=0 ppid=3660 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.210000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 05:44:30.210000 audit: BPF prog-id=31 op=UNLOAD Oct 31 05:44:30.210000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.210000 audit[4052]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff65ef3bd0 a2=50 a3=1 items=0 ppid=3660 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.210000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 05:44:30.210000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.210000 audit[4052]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fff65ef3cf0 a2=4 a3=38 items=0 ppid=3660 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.210000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 05:44:30.210000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.210000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.210000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.210000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.210000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.210000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.210000 audit[4052]: AVC avc: 
denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.210000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.210000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.210000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.210000 audit[4052]: AVC avc: denied { confidentiality } for pid=4052 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 31 05:44:30.210000 audit[4052]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff65ef3d40 a2=94 a3=6 items=0 ppid=3660 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.210000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 05:44:30.211000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.211000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.211000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.211000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.211000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.211000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.211000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.211000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.211000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.211000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.211000 audit[4052]: AVC avc: denied { confidentiality } for pid=4052 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 31 05:44:30.211000 audit[4052]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff65ef34f0 a2=94 a3=88 items=0 ppid=3660 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.211000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 05:44:30.211000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.211000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.211000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.211000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.211000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.211000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.211000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.211000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.211000 audit[4052]: AVC avc: denied { perfmon } for pid=4052 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.211000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.211000 audit[4052]: AVC avc: denied { confidentiality } for pid=4052 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Oct 31 05:44:30.211000 audit[4052]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff65ef34f0 a2=94 a3=88 items=0 ppid=3660 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.211000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 05:44:30.211000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.211000 audit[4052]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff65ef4f20 a2=10 a3=f8f00800 items=0 ppid=3660 pid=4052 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.211000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 05:44:30.212000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.212000 audit[4052]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff65ef4dc0 a2=10 a3=3 items=0 ppid=3660 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.212000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 05:44:30.212000 audit[4052]: AVC avc: denied { bpf } for pid=4052 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.212000 audit[4052]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff65ef4d60 a2=10 a3=3 items=0 ppid=3660 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.212000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 05:44:30.212000 audit[4052]: AVC avc: denied { bpf } for pid=4052 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 31 05:44:30.212000 audit[4052]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff65ef4d60 a2=10 a3=7 items=0 ppid=3660 pid=4052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.212000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Oct 31 05:44:30.215285 systemd-networkd[1069]: cali80519ea3f0c: Gained IPv6LL Oct 31 05:44:30.237000 audit: BPF prog-id=26 op=UNLOAD Oct 31 05:44:30.367000 audit[4096]: NETFILTER_CFG table=mangle:111 family=2 entries=16 op=nft_register_chain pid=4096 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 05:44:30.367000 audit[4096]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffecf06d680 a2=0 a3=7ffecf06d66c items=0 ppid=3660 pid=4096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.367000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 05:44:30.378000 audit[4094]: NETFILTER_CFG table=nat:112 family=2 entries=15 op=nft_register_chain pid=4094 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 05:44:30.378000 audit[4094]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe3eafd6f0 a2=0 a3=7ffe3eafd6dc items=0 ppid=3660 pid=4094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.378000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 05:44:30.390000 audit[4097]: NETFILTER_CFG table=raw:113 family=2 entries=21 op=nft_register_chain pid=4097 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 05:44:30.390000 audit[4097]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffdfc3177f0 a2=0 a3=7ffdfc3177dc items=0 ppid=3660 pid=4097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.390000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 05:44:30.391000 audit[4095]: NETFILTER_CFG table=filter:114 family=2 entries=222 op=nft_register_chain pid=4095 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 05:44:30.391000 audit[4095]: SYSCALL arch=c000003e syscall=46 success=yes exit=129820 a0=3 a1=7fff693c3880 a2=0 a3=7fff693c386c items=0 ppid=3660 pid=4095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.391000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 05:44:30.647215 kubelet[2197]: E1031 05:44:30.646894 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f447487f8-8md8h" podUID="da74ef2c-d536-4fbc-9b28-ba72dfbbfc21" Oct 31 05:44:30.647917 kubelet[2197]: E1031 05:44:30.647876 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ff9f49d5d-dq5pc" podUID="ac96e24b-c0dd-48fd-838b-a540fa2a89c0" Oct 31 05:44:30.746000 audit[4112]: NETFILTER_CFG table=filter:115 family=2 entries=14 op=nft_register_rule pid=4112 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:44:30.746000 audit[4112]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe07f0ee80 a2=0 a3=7ffe07f0ee6c items=0 ppid=2302 pid=4112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Oct 31 05:44:30.746000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:44:30.750000 audit[4112]: NETFILTER_CFG table=nat:116 family=2 entries=20 op=nft_register_rule pid=4112 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:44:30.750000 audit[4112]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe07f0ee80 a2=0 a3=7ffe07f0ee6c items=0 ppid=2302 pid=4112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:30.750000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:44:30.854987 systemd-networkd[1069]: calidd85143cbb7: Gained IPv6LL Oct 31 05:44:31.636732 kubelet[2197]: E1031 05:44:31.636681 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ff9f49d5d-dq5pc" podUID="ac96e24b-c0dd-48fd-838b-a540fa2a89c0" Oct 31 05:44:31.750872 systemd-networkd[1069]: vxlan.calico: Gained IPv6LL Oct 31 05:44:31.768000 audit[4116]: NETFILTER_CFG table=filter:117 family=2 entries=14 op=nft_register_rule pid=4116 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:44:31.768000 audit[4116]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc5d9872b0 a2=0 a3=7ffc5d98729c items=0 ppid=2302 pid=4116 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:31.768000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:44:31.788000 audit[4116]: NETFILTER_CFG table=nat:118 family=2 entries=56 op=nft_register_chain pid=4116 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:44:31.788000 audit[4116]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffc5d9872b0 a2=0 a3=7ffc5d98729c items=0 ppid=2302 pid=4116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:31.788000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:44:35.133924 env[1308]: time="2025-10-31T05:44:35.133737793Z" level=info msg="StopPodSandbox for \"df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595\"" Oct 31 05:44:35.333960 env[1308]: 2025-10-31 05:44:35.263 [WARNING][4128] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-whisker--7b969d5456--zlxmn-eth0" Oct 31 05:44:35.333960 env[1308]: 2025-10-31 05:44:35.263 [INFO][4128] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" Oct 31 05:44:35.333960 env[1308]: 2025-10-31 05:44:35.263 [INFO][4128] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" iface="eth0" netns="" Oct 31 05:44:35.333960 env[1308]: 2025-10-31 05:44:35.263 [INFO][4128] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" Oct 31 05:44:35.333960 env[1308]: 2025-10-31 05:44:35.263 [INFO][4128] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" Oct 31 05:44:35.333960 env[1308]: 2025-10-31 05:44:35.306 [INFO][4137] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" HandleID="k8s-pod-network.df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" Workload="srv--f2mor.gb1.brightbox.com-k8s-whisker--7b969d5456--zlxmn-eth0" Oct 31 05:44:35.333960 env[1308]: 2025-10-31 05:44:35.306 [INFO][4137] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 05:44:35.333960 env[1308]: 2025-10-31 05:44:35.306 [INFO][4137] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 05:44:35.333960 env[1308]: 2025-10-31 05:44:35.316 [WARNING][4137] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" HandleID="k8s-pod-network.df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" Workload="srv--f2mor.gb1.brightbox.com-k8s-whisker--7b969d5456--zlxmn-eth0" Oct 31 05:44:35.333960 env[1308]: 2025-10-31 05:44:35.316 [INFO][4137] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" HandleID="k8s-pod-network.df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" Workload="srv--f2mor.gb1.brightbox.com-k8s-whisker--7b969d5456--zlxmn-eth0" Oct 31 05:44:35.333960 env[1308]: 2025-10-31 05:44:35.319 [INFO][4137] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 05:44:35.333960 env[1308]: 2025-10-31 05:44:35.328 [INFO][4128] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" Oct 31 05:44:35.336281 env[1308]: time="2025-10-31T05:44:35.336216106Z" level=info msg="TearDown network for sandbox \"df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595\" successfully" Oct 31 05:44:35.336452 env[1308]: time="2025-10-31T05:44:35.336416288Z" level=info msg="StopPodSandbox for \"df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595\" returns successfully" Oct 31 05:44:35.343875 env[1308]: time="2025-10-31T05:44:35.343832934Z" level=info msg="RemovePodSandbox for \"df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595\"" Oct 31 05:44:35.343991 env[1308]: time="2025-10-31T05:44:35.343895905Z" level=info msg="Forcibly stopping sandbox \"df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595\"" Oct 31 05:44:35.488917 env[1308]: 2025-10-31 05:44:35.418 [WARNING][4152] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" 
WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-whisker--7b969d5456--zlxmn-eth0" Oct 31 05:44:35.488917 env[1308]: 2025-10-31 05:44:35.418 [INFO][4152] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" Oct 31 05:44:35.488917 env[1308]: 2025-10-31 05:44:35.418 [INFO][4152] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" iface="eth0" netns="" Oct 31 05:44:35.488917 env[1308]: 2025-10-31 05:44:35.418 [INFO][4152] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" Oct 31 05:44:35.488917 env[1308]: 2025-10-31 05:44:35.419 [INFO][4152] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" Oct 31 05:44:35.488917 env[1308]: 2025-10-31 05:44:35.469 [INFO][4165] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" HandleID="k8s-pod-network.df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" Workload="srv--f2mor.gb1.brightbox.com-k8s-whisker--7b969d5456--zlxmn-eth0" Oct 31 05:44:35.488917 env[1308]: 2025-10-31 05:44:35.472 [INFO][4165] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 05:44:35.488917 env[1308]: 2025-10-31 05:44:35.472 [INFO][4165] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 05:44:35.488917 env[1308]: 2025-10-31 05:44:35.482 [WARNING][4165] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" HandleID="k8s-pod-network.df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" Workload="srv--f2mor.gb1.brightbox.com-k8s-whisker--7b969d5456--zlxmn-eth0" Oct 31 05:44:35.488917 env[1308]: 2025-10-31 05:44:35.482 [INFO][4165] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" HandleID="k8s-pod-network.df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" Workload="srv--f2mor.gb1.brightbox.com-k8s-whisker--7b969d5456--zlxmn-eth0" Oct 31 05:44:35.488917 env[1308]: 2025-10-31 05:44:35.484 [INFO][4165] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 05:44:35.488917 env[1308]: 2025-10-31 05:44:35.487 [INFO][4152] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595" Oct 31 05:44:35.489767 env[1308]: time="2025-10-31T05:44:35.489648631Z" level=info msg="TearDown network for sandbox \"df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595\" successfully" Oct 31 05:44:35.495661 env[1308]: time="2025-10-31T05:44:35.495573974Z" level=info msg="RemovePodSandbox \"df314ac47be75258d71b1ee12a42de913a4355539db3e97995a19db6b2a42595\" returns successfully" Oct 31 05:44:35.496687 env[1308]: time="2025-10-31T05:44:35.496647978Z" level=info msg="StopPodSandbox for \"6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034\"" Oct 31 05:44:35.695119 env[1308]: 2025-10-31 05:44:35.562 [WARNING][4182] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sw8r9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"755f8c0d-5dbb-4026-80b5-87b3cb17189f", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 43, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f2mor.gb1.brightbox.com", ContainerID:"3a44d0061976e670f0c38166a21125c5b676e80aa251b08612328a20fc6b0552", Pod:"coredns-668d6bf9bc-sw8r9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali80519ea3f0c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:44:35.695119 env[1308]: 2025-10-31 
05:44:35.562 [INFO][4182] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" Oct 31 05:44:35.695119 env[1308]: 2025-10-31 05:44:35.562 [INFO][4182] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" iface="eth0" netns="" Oct 31 05:44:35.695119 env[1308]: 2025-10-31 05:44:35.562 [INFO][4182] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" Oct 31 05:44:35.695119 env[1308]: 2025-10-31 05:44:35.562 [INFO][4182] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" Oct 31 05:44:35.695119 env[1308]: 2025-10-31 05:44:35.608 [INFO][4189] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" HandleID="k8s-pod-network.6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" Workload="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sw8r9-eth0" Oct 31 05:44:35.695119 env[1308]: 2025-10-31 05:44:35.608 [INFO][4189] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 05:44:35.695119 env[1308]: 2025-10-31 05:44:35.608 [INFO][4189] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 05:44:35.695119 env[1308]: 2025-10-31 05:44:35.665 [WARNING][4189] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" HandleID="k8s-pod-network.6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" Workload="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sw8r9-eth0" Oct 31 05:44:35.695119 env[1308]: 2025-10-31 05:44:35.665 [INFO][4189] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" HandleID="k8s-pod-network.6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" Workload="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sw8r9-eth0" Oct 31 05:44:35.695119 env[1308]: 2025-10-31 05:44:35.677 [INFO][4189] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 05:44:35.695119 env[1308]: 2025-10-31 05:44:35.693 [INFO][4182] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" Oct 31 05:44:35.696290 env[1308]: time="2025-10-31T05:44:35.696233048Z" level=info msg="TearDown network for sandbox \"6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034\" successfully" Oct 31 05:44:35.696478 env[1308]: time="2025-10-31T05:44:35.696444234Z" level=info msg="StopPodSandbox for \"6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034\" returns successfully" Oct 31 05:44:35.723301 env[1308]: time="2025-10-31T05:44:35.723236449Z" level=info msg="RemovePodSandbox for \"6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034\"" Oct 31 05:44:35.723524 env[1308]: time="2025-10-31T05:44:35.723298523Z" level=info msg="Forcibly stopping sandbox \"6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034\"" Oct 31 05:44:35.926805 env[1308]: 2025-10-31 05:44:35.866 [WARNING][4203] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sw8r9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"755f8c0d-5dbb-4026-80b5-87b3cb17189f", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 43, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f2mor.gb1.brightbox.com", ContainerID:"3a44d0061976e670f0c38166a21125c5b676e80aa251b08612328a20fc6b0552", Pod:"coredns-668d6bf9bc-sw8r9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali80519ea3f0c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:44:35.926805 env[1308]: 2025-10-31 
05:44:35.867 [INFO][4203] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" Oct 31 05:44:35.926805 env[1308]: 2025-10-31 05:44:35.867 [INFO][4203] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" iface="eth0" netns="" Oct 31 05:44:35.926805 env[1308]: 2025-10-31 05:44:35.867 [INFO][4203] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" Oct 31 05:44:35.926805 env[1308]: 2025-10-31 05:44:35.867 [INFO][4203] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" Oct 31 05:44:35.926805 env[1308]: 2025-10-31 05:44:35.908 [INFO][4210] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" HandleID="k8s-pod-network.6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" Workload="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sw8r9-eth0" Oct 31 05:44:35.926805 env[1308]: 2025-10-31 05:44:35.908 [INFO][4210] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 05:44:35.926805 env[1308]: 2025-10-31 05:44:35.908 [INFO][4210] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 05:44:35.926805 env[1308]: 2025-10-31 05:44:35.918 [WARNING][4210] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" HandleID="k8s-pod-network.6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" Workload="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sw8r9-eth0" Oct 31 05:44:35.926805 env[1308]: 2025-10-31 05:44:35.918 [INFO][4210] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" HandleID="k8s-pod-network.6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" Workload="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sw8r9-eth0" Oct 31 05:44:35.926805 env[1308]: 2025-10-31 05:44:35.921 [INFO][4210] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 05:44:35.926805 env[1308]: 2025-10-31 05:44:35.923 [INFO][4203] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034" Oct 31 05:44:35.926805 env[1308]: time="2025-10-31T05:44:35.925734587Z" level=info msg="TearDown network for sandbox \"6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034\" successfully" Oct 31 05:44:35.931595 env[1308]: time="2025-10-31T05:44:35.931527466Z" level=info msg="RemovePodSandbox \"6f9315806b5312e3f76ab9de2cf2093a97a69f0c7ed0098b9758eb8872c9a034\" returns successfully" Oct 31 05:44:35.932354 env[1308]: time="2025-10-31T05:44:35.932314817Z" level=info msg="StopPodSandbox for \"0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993\"" Oct 31 05:44:36.067635 env[1308]: 2025-10-31 05:44:36.015 [WARNING][4225] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f2mor.gb1.brightbox.com-k8s-csi--node--driver--6jdvb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"749c5f31-df45-44a4-9a60-d28a8f071a0b", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 43, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f2mor.gb1.brightbox.com", ContainerID:"27da1bf714a7b9834d80897dba6f0e9eba5c46cd8cba8a6efe7ec1569057ce53", Pod:"csi-node-driver-6jdvb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali09ab8106a20", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:44:36.067635 env[1308]: 2025-10-31 05:44:36.016 [INFO][4225] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" Oct 31 05:44:36.067635 env[1308]: 2025-10-31 05:44:36.016 [INFO][4225] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" iface="eth0" netns="" Oct 31 05:44:36.067635 env[1308]: 2025-10-31 05:44:36.016 [INFO][4225] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" Oct 31 05:44:36.067635 env[1308]: 2025-10-31 05:44:36.016 [INFO][4225] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" Oct 31 05:44:36.067635 env[1308]: 2025-10-31 05:44:36.051 [INFO][4232] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" HandleID="k8s-pod-network.0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" Workload="srv--f2mor.gb1.brightbox.com-k8s-csi--node--driver--6jdvb-eth0" Oct 31 05:44:36.067635 env[1308]: 2025-10-31 05:44:36.051 [INFO][4232] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 05:44:36.067635 env[1308]: 2025-10-31 05:44:36.051 [INFO][4232] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 05:44:36.067635 env[1308]: 2025-10-31 05:44:36.061 [WARNING][4232] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" HandleID="k8s-pod-network.0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" Workload="srv--f2mor.gb1.brightbox.com-k8s-csi--node--driver--6jdvb-eth0" Oct 31 05:44:36.067635 env[1308]: 2025-10-31 05:44:36.061 [INFO][4232] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" HandleID="k8s-pod-network.0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" Workload="srv--f2mor.gb1.brightbox.com-k8s-csi--node--driver--6jdvb-eth0" Oct 31 05:44:36.067635 env[1308]: 2025-10-31 05:44:36.063 [INFO][4232] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 05:44:36.067635 env[1308]: 2025-10-31 05:44:36.065 [INFO][4225] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" Oct 31 05:44:36.068523 env[1308]: time="2025-10-31T05:44:36.067674587Z" level=info msg="TearDown network for sandbox \"0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993\" successfully" Oct 31 05:44:36.068523 env[1308]: time="2025-10-31T05:44:36.067721244Z" level=info msg="StopPodSandbox for \"0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993\" returns successfully" Oct 31 05:44:36.070848 env[1308]: time="2025-10-31T05:44:36.070732351Z" level=info msg="RemovePodSandbox for \"0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993\"" Oct 31 05:44:36.071054 env[1308]: time="2025-10-31T05:44:36.070986813Z" level=info msg="Forcibly stopping sandbox \"0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993\"" Oct 31 05:44:36.193312 env[1308]: 2025-10-31 05:44:36.125 [WARNING][4246] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f2mor.gb1.brightbox.com-k8s-csi--node--driver--6jdvb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"749c5f31-df45-44a4-9a60-d28a8f071a0b", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 43, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f2mor.gb1.brightbox.com", ContainerID:"27da1bf714a7b9834d80897dba6f0e9eba5c46cd8cba8a6efe7ec1569057ce53", Pod:"csi-node-driver-6jdvb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali09ab8106a20", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:44:36.193312 env[1308]: 2025-10-31 05:44:36.125 [INFO][4246] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" Oct 31 05:44:36.193312 env[1308]: 2025-10-31 05:44:36.125 [INFO][4246] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with 
no netns name, ignoring. ContainerID="0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" iface="eth0" netns="" Oct 31 05:44:36.193312 env[1308]: 2025-10-31 05:44:36.125 [INFO][4246] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" Oct 31 05:44:36.193312 env[1308]: 2025-10-31 05:44:36.125 [INFO][4246] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" Oct 31 05:44:36.193312 env[1308]: 2025-10-31 05:44:36.171 [INFO][4253] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" HandleID="k8s-pod-network.0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" Workload="srv--f2mor.gb1.brightbox.com-k8s-csi--node--driver--6jdvb-eth0" Oct 31 05:44:36.193312 env[1308]: 2025-10-31 05:44:36.172 [INFO][4253] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 05:44:36.193312 env[1308]: 2025-10-31 05:44:36.172 [INFO][4253] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 05:44:36.193312 env[1308]: 2025-10-31 05:44:36.183 [WARNING][4253] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" HandleID="k8s-pod-network.0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" Workload="srv--f2mor.gb1.brightbox.com-k8s-csi--node--driver--6jdvb-eth0" Oct 31 05:44:36.193312 env[1308]: 2025-10-31 05:44:36.183 [INFO][4253] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" HandleID="k8s-pod-network.0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" Workload="srv--f2mor.gb1.brightbox.com-k8s-csi--node--driver--6jdvb-eth0" Oct 31 05:44:36.193312 env[1308]: 2025-10-31 05:44:36.185 [INFO][4253] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 05:44:36.193312 env[1308]: 2025-10-31 05:44:36.187 [INFO][4246] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993" Oct 31 05:44:36.193312 env[1308]: time="2025-10-31T05:44:36.191651878Z" level=info msg="TearDown network for sandbox \"0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993\" successfully" Oct 31 05:44:36.197394 env[1308]: time="2025-10-31T05:44:36.197323241Z" level=info msg="RemovePodSandbox \"0aeab1345c7187e706e6d87aca731cabb5b45d5744286539139f1dc6a398d993\" returns successfully" Oct 31 05:44:36.198080 env[1308]: time="2025-10-31T05:44:36.198041544Z" level=info msg="StopPodSandbox for \"00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d\"" Oct 31 05:44:36.299723 env[1308]: 2025-10-31 05:44:36.253 [WARNING][4268] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4vc6k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d36a9e7d-9b1c-4050-ab86-4f0f608f5584", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 43, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f2mor.gb1.brightbox.com", ContainerID:"ef34184328afc73e802aa7879a3fe4b9a57dd12992ed6273bf37a266c1e3733a", Pod:"coredns-668d6bf9bc-4vc6k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali36170ee9025", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:44:36.299723 env[1308]: 2025-10-31 
05:44:36.253 [INFO][4268] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d" Oct 31 05:44:36.299723 env[1308]: 2025-10-31 05:44:36.253 [INFO][4268] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d" iface="eth0" netns="" Oct 31 05:44:36.299723 env[1308]: 2025-10-31 05:44:36.253 [INFO][4268] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d" Oct 31 05:44:36.299723 env[1308]: 2025-10-31 05:44:36.254 [INFO][4268] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d" Oct 31 05:44:36.299723 env[1308]: 2025-10-31 05:44:36.282 [INFO][4276] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d" HandleID="k8s-pod-network.00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d" Workload="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4vc6k-eth0" Oct 31 05:44:36.299723 env[1308]: 2025-10-31 05:44:36.282 [INFO][4276] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 05:44:36.299723 env[1308]: 2025-10-31 05:44:36.282 [INFO][4276] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 05:44:36.299723 env[1308]: 2025-10-31 05:44:36.292 [WARNING][4276] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d" HandleID="k8s-pod-network.00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d" Workload="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4vc6k-eth0" Oct 31 05:44:36.299723 env[1308]: 2025-10-31 05:44:36.292 [INFO][4276] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d" HandleID="k8s-pod-network.00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d" Workload="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4vc6k-eth0" Oct 31 05:44:36.299723 env[1308]: 2025-10-31 05:44:36.294 [INFO][4276] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 05:44:36.299723 env[1308]: 2025-10-31 05:44:36.297 [INFO][4268] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d" Oct 31 05:44:36.300862 env[1308]: time="2025-10-31T05:44:36.300805390Z" level=info msg="TearDown network for sandbox \"00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d\" successfully" Oct 31 05:44:36.300996 env[1308]: time="2025-10-31T05:44:36.300963151Z" level=info msg="StopPodSandbox for \"00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d\" returns successfully" Oct 31 05:44:36.302188 env[1308]: time="2025-10-31T05:44:36.302137643Z" level=info msg="RemovePodSandbox for \"00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d\"" Oct 31 05:44:36.302294 env[1308]: time="2025-10-31T05:44:36.302192327Z" level=info msg="Forcibly stopping sandbox \"00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d\"" Oct 31 05:44:36.406341 env[1308]: 2025-10-31 05:44:36.357 [WARNING][4292] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4vc6k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d36a9e7d-9b1c-4050-ab86-4f0f608f5584", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 43, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f2mor.gb1.brightbox.com", ContainerID:"ef34184328afc73e802aa7879a3fe4b9a57dd12992ed6273bf37a266c1e3733a", Pod:"coredns-668d6bf9bc-4vc6k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.24.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali36170ee9025", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:44:36.406341 env[1308]: 2025-10-31 
05:44:36.358 [INFO][4292] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d" Oct 31 05:44:36.406341 env[1308]: 2025-10-31 05:44:36.358 [INFO][4292] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d" iface="eth0" netns="" Oct 31 05:44:36.406341 env[1308]: 2025-10-31 05:44:36.358 [INFO][4292] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d" Oct 31 05:44:36.406341 env[1308]: 2025-10-31 05:44:36.358 [INFO][4292] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d" Oct 31 05:44:36.406341 env[1308]: 2025-10-31 05:44:36.385 [INFO][4299] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d" HandleID="k8s-pod-network.00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d" Workload="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4vc6k-eth0" Oct 31 05:44:36.406341 env[1308]: 2025-10-31 05:44:36.386 [INFO][4299] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 05:44:36.406341 env[1308]: 2025-10-31 05:44:36.386 [INFO][4299] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 05:44:36.406341 env[1308]: 2025-10-31 05:44:36.397 [WARNING][4299] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d" HandleID="k8s-pod-network.00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d" Workload="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4vc6k-eth0" Oct 31 05:44:36.406341 env[1308]: 2025-10-31 05:44:36.397 [INFO][4299] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d" HandleID="k8s-pod-network.00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d" Workload="srv--f2mor.gb1.brightbox.com-k8s-coredns--668d6bf9bc--4vc6k-eth0" Oct 31 05:44:36.406341 env[1308]: 2025-10-31 05:44:36.399 [INFO][4299] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 05:44:36.406341 env[1308]: 2025-10-31 05:44:36.401 [INFO][4292] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d" Oct 31 05:44:36.407441 env[1308]: time="2025-10-31T05:44:36.407378714Z" level=info msg="TearDown network for sandbox \"00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d\" successfully" Oct 31 05:44:36.412381 env[1308]: time="2025-10-31T05:44:36.412342424Z" level=info msg="RemovePodSandbox \"00e0a3d28859f0cf07fdbe6197877c8e7a77c3179101731c42d30c59adf9285d\" returns successfully" Oct 31 05:44:36.413508 env[1308]: time="2025-10-31T05:44:36.413455567Z" level=info msg="StopPodSandbox for \"dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc\"" Oct 31 05:44:36.519123 env[1308]: 2025-10-31 05:44:36.468 [WARNING][4314] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--dq5pc-eth0", GenerateName:"calico-apiserver-7ff9f49d5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"ac96e24b-c0dd-48fd-838b-a540fa2a89c0", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 43, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ff9f49d5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f2mor.gb1.brightbox.com", ContainerID:"6628bcabf0d466fc7613650e368ec5c86d6c84911e3d5abda5227480a5c235ce", Pod:"calico-apiserver-7ff9f49d5d-dq5pc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd85143cbb7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:44:36.519123 env[1308]: 2025-10-31 05:44:36.468 [INFO][4314] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" Oct 31 05:44:36.519123 env[1308]: 2025-10-31 05:44:36.468 [INFO][4314] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" iface="eth0" netns="" Oct 31 05:44:36.519123 env[1308]: 2025-10-31 05:44:36.468 [INFO][4314] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" Oct 31 05:44:36.519123 env[1308]: 2025-10-31 05:44:36.468 [INFO][4314] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" Oct 31 05:44:36.519123 env[1308]: 2025-10-31 05:44:36.500 [INFO][4322] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" HandleID="k8s-pod-network.dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--dq5pc-eth0" Oct 31 05:44:36.519123 env[1308]: 2025-10-31 05:44:36.500 [INFO][4322] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 05:44:36.519123 env[1308]: 2025-10-31 05:44:36.500 [INFO][4322] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 05:44:36.519123 env[1308]: 2025-10-31 05:44:36.512 [WARNING][4322] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" HandleID="k8s-pod-network.dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--dq5pc-eth0" Oct 31 05:44:36.519123 env[1308]: 2025-10-31 05:44:36.512 [INFO][4322] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" HandleID="k8s-pod-network.dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--dq5pc-eth0" Oct 31 05:44:36.519123 env[1308]: 2025-10-31 05:44:36.515 [INFO][4322] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 05:44:36.519123 env[1308]: 2025-10-31 05:44:36.517 [INFO][4314] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" Oct 31 05:44:36.520141 env[1308]: time="2025-10-31T05:44:36.519186460Z" level=info msg="TearDown network for sandbox \"dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc\" successfully" Oct 31 05:44:36.520141 env[1308]: time="2025-10-31T05:44:36.519277695Z" level=info msg="StopPodSandbox for \"dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc\" returns successfully" Oct 31 05:44:36.520601 env[1308]: time="2025-10-31T05:44:36.520517202Z" level=info msg="RemovePodSandbox for \"dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc\"" Oct 31 05:44:36.520958 env[1308]: time="2025-10-31T05:44:36.520875510Z" level=info msg="Forcibly stopping sandbox \"dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc\"" Oct 31 05:44:36.635378 env[1308]: 2025-10-31 05:44:36.583 [WARNING][4336] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--dq5pc-eth0", GenerateName:"calico-apiserver-7ff9f49d5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"ac96e24b-c0dd-48fd-838b-a540fa2a89c0", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 43, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ff9f49d5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f2mor.gb1.brightbox.com", ContainerID:"6628bcabf0d466fc7613650e368ec5c86d6c84911e3d5abda5227480a5c235ce", Pod:"calico-apiserver-7ff9f49d5d-dq5pc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd85143cbb7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:44:36.635378 env[1308]: 2025-10-31 05:44:36.583 [INFO][4336] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" Oct 31 05:44:36.635378 env[1308]: 2025-10-31 05:44:36.583 [INFO][4336] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" iface="eth0" netns="" Oct 31 05:44:36.635378 env[1308]: 2025-10-31 05:44:36.584 [INFO][4336] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" Oct 31 05:44:36.635378 env[1308]: 2025-10-31 05:44:36.584 [INFO][4336] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" Oct 31 05:44:36.635378 env[1308]: 2025-10-31 05:44:36.617 [INFO][4344] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" HandleID="k8s-pod-network.dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--dq5pc-eth0" Oct 31 05:44:36.635378 env[1308]: 2025-10-31 05:44:36.618 [INFO][4344] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 05:44:36.635378 env[1308]: 2025-10-31 05:44:36.618 [INFO][4344] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 05:44:36.635378 env[1308]: 2025-10-31 05:44:36.628 [WARNING][4344] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" HandleID="k8s-pod-network.dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--dq5pc-eth0" Oct 31 05:44:36.635378 env[1308]: 2025-10-31 05:44:36.628 [INFO][4344] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" HandleID="k8s-pod-network.dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--dq5pc-eth0" Oct 31 05:44:36.635378 env[1308]: 2025-10-31 05:44:36.630 [INFO][4344] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 05:44:36.635378 env[1308]: 2025-10-31 05:44:36.633 [INFO][4336] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc" Oct 31 05:44:36.636346 env[1308]: time="2025-10-31T05:44:36.635422355Z" level=info msg="TearDown network for sandbox \"dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc\" successfully" Oct 31 05:44:36.644600 env[1308]: time="2025-10-31T05:44:36.644481169Z" level=info msg="RemovePodSandbox \"dbc28ae30aba19b2928dea16f4a342cc479a12caefe70478d0882e1ba30f73dc\" returns successfully" Oct 31 05:44:38.224878 env[1308]: time="2025-10-31T05:44:38.224708132Z" level=info msg="StopPodSandbox for \"3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4\"" Oct 31 05:44:38.345866 env[1308]: 2025-10-31 05:44:38.298 [INFO][4360] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" Oct 31 05:44:38.345866 env[1308]: 2025-10-31 05:44:38.298 [INFO][4360] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" iface="eth0" netns="/var/run/netns/cni-4a42b43b-cae4-4bbd-0e22-94be40231add" Oct 31 05:44:38.345866 env[1308]: 2025-10-31 05:44:38.299 [INFO][4360] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" iface="eth0" netns="/var/run/netns/cni-4a42b43b-cae4-4bbd-0e22-94be40231add" Oct 31 05:44:38.345866 env[1308]: 2025-10-31 05:44:38.299 [INFO][4360] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" iface="eth0" netns="/var/run/netns/cni-4a42b43b-cae4-4bbd-0e22-94be40231add" Oct 31 05:44:38.345866 env[1308]: 2025-10-31 05:44:38.299 [INFO][4360] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" Oct 31 05:44:38.345866 env[1308]: 2025-10-31 05:44:38.299 [INFO][4360] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" Oct 31 05:44:38.345866 env[1308]: 2025-10-31 05:44:38.329 [INFO][4367] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" HandleID="k8s-pod-network.3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--sjjrx-eth0" Oct 31 05:44:38.345866 env[1308]: 2025-10-31 05:44:38.329 [INFO][4367] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 05:44:38.345866 env[1308]: 2025-10-31 05:44:38.329 [INFO][4367] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 05:44:38.345866 env[1308]: 2025-10-31 05:44:38.339 [WARNING][4367] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" HandleID="k8s-pod-network.3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--sjjrx-eth0" Oct 31 05:44:38.345866 env[1308]: 2025-10-31 05:44:38.339 [INFO][4367] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" HandleID="k8s-pod-network.3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--sjjrx-eth0" Oct 31 05:44:38.345866 env[1308]: 2025-10-31 05:44:38.341 [INFO][4367] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 05:44:38.345866 env[1308]: 2025-10-31 05:44:38.343 [INFO][4360] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" Oct 31 05:44:38.351221 systemd[1]: run-netns-cni\x2d4a42b43b\x2dcae4\x2d4bbd\x2d0e22\x2d94be40231add.mount: Deactivated successfully. 
Oct 31 05:44:38.352566 env[1308]: time="2025-10-31T05:44:38.352476800Z" level=info msg="TearDown network for sandbox \"3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4\" successfully" Oct 31 05:44:38.352697 env[1308]: time="2025-10-31T05:44:38.352662639Z" level=info msg="StopPodSandbox for \"3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4\" returns successfully" Oct 31 05:44:38.354624 env[1308]: time="2025-10-31T05:44:38.354582763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ff9f49d5d-sjjrx,Uid:4433a427-a60f-4547-95ae-ea306784cb66,Namespace:calico-apiserver,Attempt:1,}" Oct 31 05:44:38.550850 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 31 05:44:38.551045 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali6b4edd9f57a: link becomes ready Oct 31 05:44:38.549186 systemd-networkd[1069]: cali6b4edd9f57a: Link UP Oct 31 05:44:38.553639 systemd-networkd[1069]: cali6b4edd9f57a: Gained carrier Oct 31 05:44:38.586600 env[1308]: 2025-10-31 05:44:38.432 [INFO][4374] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--sjjrx-eth0 calico-apiserver-7ff9f49d5d- calico-apiserver 4433a427-a60f-4547-95ae-ea306784cb66 1074 0 2025-10-31 05:43:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7ff9f49d5d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-f2mor.gb1.brightbox.com calico-apiserver-7ff9f49d5d-sjjrx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6b4edd9f57a [] [] }} ContainerID="0103e5d65872250c76158990793100cdddd0f3d160c475fa764f95212cd24109" Namespace="calico-apiserver" Pod="calico-apiserver-7ff9f49d5d-sjjrx" 
WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--sjjrx-" Oct 31 05:44:38.586600 env[1308]: 2025-10-31 05:44:38.432 [INFO][4374] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0103e5d65872250c76158990793100cdddd0f3d160c475fa764f95212cd24109" Namespace="calico-apiserver" Pod="calico-apiserver-7ff9f49d5d-sjjrx" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--sjjrx-eth0" Oct 31 05:44:38.586600 env[1308]: 2025-10-31 05:44:38.483 [INFO][4386] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0103e5d65872250c76158990793100cdddd0f3d160c475fa764f95212cd24109" HandleID="k8s-pod-network.0103e5d65872250c76158990793100cdddd0f3d160c475fa764f95212cd24109" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--sjjrx-eth0" Oct 31 05:44:38.586600 env[1308]: 2025-10-31 05:44:38.483 [INFO][4386] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0103e5d65872250c76158990793100cdddd0f3d160c475fa764f95212cd24109" HandleID="k8s-pod-network.0103e5d65872250c76158990793100cdddd0f3d160c475fa764f95212cd24109" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--sjjrx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c8140), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-f2mor.gb1.brightbox.com", "pod":"calico-apiserver-7ff9f49d5d-sjjrx", "timestamp":"2025-10-31 05:44:38.483050368 +0000 UTC"}, Hostname:"srv-f2mor.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 05:44:38.586600 env[1308]: 2025-10-31 05:44:38.483 [INFO][4386] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 31 05:44:38.586600 env[1308]: 2025-10-31 05:44:38.483 [INFO][4386] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 05:44:38.586600 env[1308]: 2025-10-31 05:44:38.483 [INFO][4386] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-f2mor.gb1.brightbox.com' Oct 31 05:44:38.586600 env[1308]: 2025-10-31 05:44:38.494 [INFO][4386] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0103e5d65872250c76158990793100cdddd0f3d160c475fa764f95212cd24109" host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:38.586600 env[1308]: 2025-10-31 05:44:38.503 [INFO][4386] ipam/ipam.go 394: Looking up existing affinities for host host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:38.586600 env[1308]: 2025-10-31 05:44:38.510 [INFO][4386] ipam/ipam.go 511: Trying affinity for 192.168.24.128/26 host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:38.586600 env[1308]: 2025-10-31 05:44:38.513 [INFO][4386] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.128/26 host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:38.586600 env[1308]: 2025-10-31 05:44:38.517 [INFO][4386] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.128/26 host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:38.586600 env[1308]: 2025-10-31 05:44:38.517 [INFO][4386] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.24.128/26 handle="k8s-pod-network.0103e5d65872250c76158990793100cdddd0f3d160c475fa764f95212cd24109" host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:38.586600 env[1308]: 2025-10-31 05:44:38.519 [INFO][4386] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0103e5d65872250c76158990793100cdddd0f3d160c475fa764f95212cd24109 Oct 31 05:44:38.586600 env[1308]: 2025-10-31 05:44:38.525 [INFO][4386] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.24.128/26 handle="k8s-pod-network.0103e5d65872250c76158990793100cdddd0f3d160c475fa764f95212cd24109" host="srv-f2mor.gb1.brightbox.com" Oct 31 
05:44:38.586600 env[1308]: 2025-10-31 05:44:38.533 [INFO][4386] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.24.134/26] block=192.168.24.128/26 handle="k8s-pod-network.0103e5d65872250c76158990793100cdddd0f3d160c475fa764f95212cd24109" host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:38.586600 env[1308]: 2025-10-31 05:44:38.533 [INFO][4386] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.134/26] handle="k8s-pod-network.0103e5d65872250c76158990793100cdddd0f3d160c475fa764f95212cd24109" host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:38.586600 env[1308]: 2025-10-31 05:44:38.534 [INFO][4386] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 05:44:38.586600 env[1308]: 2025-10-31 05:44:38.534 [INFO][4386] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.24.134/26] IPv6=[] ContainerID="0103e5d65872250c76158990793100cdddd0f3d160c475fa764f95212cd24109" HandleID="k8s-pod-network.0103e5d65872250c76158990793100cdddd0f3d160c475fa764f95212cd24109" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--sjjrx-eth0" Oct 31 05:44:38.588191 env[1308]: 2025-10-31 05:44:38.539 [INFO][4374] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0103e5d65872250c76158990793100cdddd0f3d160c475fa764f95212cd24109" Namespace="calico-apiserver" Pod="calico-apiserver-7ff9f49d5d-sjjrx" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--sjjrx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--sjjrx-eth0", GenerateName:"calico-apiserver-7ff9f49d5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"4433a427-a60f-4547-95ae-ea306784cb66", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 43, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ff9f49d5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f2mor.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-7ff9f49d5d-sjjrx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6b4edd9f57a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:44:38.588191 env[1308]: 2025-10-31 05:44:38.539 [INFO][4374] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.134/32] ContainerID="0103e5d65872250c76158990793100cdddd0f3d160c475fa764f95212cd24109" Namespace="calico-apiserver" Pod="calico-apiserver-7ff9f49d5d-sjjrx" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--sjjrx-eth0" Oct 31 05:44:38.588191 env[1308]: 2025-10-31 05:44:38.539 [INFO][4374] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b4edd9f57a ContainerID="0103e5d65872250c76158990793100cdddd0f3d160c475fa764f95212cd24109" Namespace="calico-apiserver" Pod="calico-apiserver-7ff9f49d5d-sjjrx" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--sjjrx-eth0" Oct 31 05:44:38.588191 env[1308]: 2025-10-31 05:44:38.558 [INFO][4374] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0103e5d65872250c76158990793100cdddd0f3d160c475fa764f95212cd24109" 
Namespace="calico-apiserver" Pod="calico-apiserver-7ff9f49d5d-sjjrx" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--sjjrx-eth0" Oct 31 05:44:38.588191 env[1308]: 2025-10-31 05:44:38.559 [INFO][4374] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0103e5d65872250c76158990793100cdddd0f3d160c475fa764f95212cd24109" Namespace="calico-apiserver" Pod="calico-apiserver-7ff9f49d5d-sjjrx" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--sjjrx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--sjjrx-eth0", GenerateName:"calico-apiserver-7ff9f49d5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"4433a427-a60f-4547-95ae-ea306784cb66", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 43, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ff9f49d5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f2mor.gb1.brightbox.com", ContainerID:"0103e5d65872250c76158990793100cdddd0f3d160c475fa764f95212cd24109", Pod:"calico-apiserver-7ff9f49d5d-sjjrx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali6b4edd9f57a", MAC:"da:ca:b1:12:57:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:44:38.588191 env[1308]: 2025-10-31 05:44:38.583 [INFO][4374] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0103e5d65872250c76158990793100cdddd0f3d160c475fa764f95212cd24109" Namespace="calico-apiserver" Pod="calico-apiserver-7ff9f49d5d-sjjrx" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--sjjrx-eth0" Oct 31 05:44:38.609000 audit[4402]: NETFILTER_CFG table=filter:119 family=2 entries=53 op=nft_register_chain pid=4402 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 05:44:38.612093 kernel: kauditd_printk_skb: 577 callbacks suppressed Oct 31 05:44:38.612202 kernel: audit: type=1325 audit(1761889478.609:436): table=filter:119 family=2 entries=53 op=nft_register_chain pid=4402 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 05:44:38.609000 audit[4402]: SYSCALL arch=c000003e syscall=46 success=yes exit=26640 a0=3 a1=7fff618318e0 a2=0 a3=7fff618318cc items=0 ppid=3660 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:38.623799 kernel: audit: type=1300 audit(1761889478.609:436): arch=c000003e syscall=46 success=yes exit=26640 a0=3 a1=7fff618318e0 a2=0 a3=7fff618318cc items=0 ppid=3660 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:38.609000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 05:44:38.628664 kernel: audit: type=1327 
audit(1761889478.609:436): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 05:44:38.642018 env[1308]: time="2025-10-31T05:44:38.641822179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 05:44:38.642018 env[1308]: time="2025-10-31T05:44:38.641932867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 05:44:38.642018 env[1308]: time="2025-10-31T05:44:38.641953140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 05:44:38.654220 env[1308]: time="2025-10-31T05:44:38.644607877Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0103e5d65872250c76158990793100cdddd0f3d160c475fa764f95212cd24109 pid=4410 runtime=io.containerd.runc.v2 Oct 31 05:44:38.776654 env[1308]: time="2025-10-31T05:44:38.776489766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ff9f49d5d-sjjrx,Uid:4433a427-a60f-4547-95ae-ea306784cb66,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0103e5d65872250c76158990793100cdddd0f3d160c475fa764f95212cd24109\"" Oct 31 05:44:38.781584 env[1308]: time="2025-10-31T05:44:38.781549040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 05:44:39.088762 env[1308]: time="2025-10-31T05:44:39.088682430Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 05:44:39.090155 env[1308]: time="2025-10-31T05:44:39.090101088Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 05:44:39.090505 kubelet[2197]: E1031 05:44:39.090442 2197 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 05:44:39.091145 kubelet[2197]: E1031 05:44:39.090523 2197 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 05:44:39.091145 kubelet[2197]: E1031 05:44:39.090745 2197 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dr6kh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7ff9f49d5d-sjjrx_calico-apiserver(4433a427-a60f-4547-95ae-ea306784cb66): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 05:44:39.093256 kubelet[2197]: E1031 05:44:39.092395 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ff9f49d5d-sjjrx" podUID="4433a427-a60f-4547-95ae-ea306784cb66" Oct 31 05:44:39.227330 env[1308]: time="2025-10-31T05:44:39.227247196Z" level=info msg="StopPodSandbox for \"5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1\"" Oct 31 05:44:39.228173 env[1308]: time="2025-10-31T05:44:39.228134162Z" level=info msg="StopPodSandbox for \"ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02\"" Oct 31 05:44:39.351339 systemd[1]: run-containerd-runc-k8s.io-0103e5d65872250c76158990793100cdddd0f3d160c475fa764f95212cd24109-runc.pyF34D.mount: Deactivated successfully. Oct 31 05:44:39.416839 env[1308]: 2025-10-31 05:44:39.326 [INFO][4463] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" Oct 31 05:44:39.416839 env[1308]: 2025-10-31 05:44:39.326 [INFO][4463] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" iface="eth0" netns="/var/run/netns/cni-d186a8cf-a7a1-4e5c-a6e8-2ff385653a83" Oct 31 05:44:39.416839 env[1308]: 2025-10-31 05:44:39.327 [INFO][4463] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" iface="eth0" netns="/var/run/netns/cni-d186a8cf-a7a1-4e5c-a6e8-2ff385653a83" Oct 31 05:44:39.416839 env[1308]: 2025-10-31 05:44:39.328 [INFO][4463] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" iface="eth0" netns="/var/run/netns/cni-d186a8cf-a7a1-4e5c-a6e8-2ff385653a83" Oct 31 05:44:39.416839 env[1308]: 2025-10-31 05:44:39.328 [INFO][4463] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" Oct 31 05:44:39.416839 env[1308]: 2025-10-31 05:44:39.328 [INFO][4463] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" Oct 31 05:44:39.416839 env[1308]: 2025-10-31 05:44:39.391 [INFO][4483] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" HandleID="k8s-pod-network.5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--kube--controllers--58d798db8c--5nl8j-eth0" Oct 31 05:44:39.416839 env[1308]: 2025-10-31 05:44:39.391 [INFO][4483] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 05:44:39.416839 env[1308]: 2025-10-31 05:44:39.391 [INFO][4483] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 05:44:39.416839 env[1308]: 2025-10-31 05:44:39.407 [WARNING][4483] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" HandleID="k8s-pod-network.5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--kube--controllers--58d798db8c--5nl8j-eth0" Oct 31 05:44:39.416839 env[1308]: 2025-10-31 05:44:39.407 [INFO][4483] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" HandleID="k8s-pod-network.5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--kube--controllers--58d798db8c--5nl8j-eth0" Oct 31 05:44:39.416839 env[1308]: 2025-10-31 05:44:39.410 [INFO][4483] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 05:44:39.416839 env[1308]: 2025-10-31 05:44:39.412 [INFO][4463] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" Oct 31 05:44:39.421991 systemd[1]: run-netns-cni\x2dd186a8cf\x2da7a1\x2d4e5c\x2da6e8\x2d2ff385653a83.mount: Deactivated successfully. 
Oct 31 05:44:39.424768 env[1308]: time="2025-10-31T05:44:39.424582704Z" level=info msg="TearDown network for sandbox \"5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1\" successfully" Oct 31 05:44:39.424942 env[1308]: time="2025-10-31T05:44:39.424906866Z" level=info msg="StopPodSandbox for \"5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1\" returns successfully" Oct 31 05:44:39.426276 env[1308]: time="2025-10-31T05:44:39.426236674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58d798db8c-5nl8j,Uid:6f047019-b3ae-41f9-bdae-4d0664c67b92,Namespace:calico-system,Attempt:1,}" Oct 31 05:44:39.457179 env[1308]: 2025-10-31 05:44:39.343 [INFO][4471] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" Oct 31 05:44:39.457179 env[1308]: 2025-10-31 05:44:39.344 [INFO][4471] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" iface="eth0" netns="/var/run/netns/cni-b3a1c44f-860f-a46d-07ff-eedfea58e498" Oct 31 05:44:39.457179 env[1308]: 2025-10-31 05:44:39.344 [INFO][4471] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" iface="eth0" netns="/var/run/netns/cni-b3a1c44f-860f-a46d-07ff-eedfea58e498" Oct 31 05:44:39.457179 env[1308]: 2025-10-31 05:44:39.344 [INFO][4471] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" iface="eth0" netns="/var/run/netns/cni-b3a1c44f-860f-a46d-07ff-eedfea58e498" Oct 31 05:44:39.457179 env[1308]: 2025-10-31 05:44:39.344 [INFO][4471] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" Oct 31 05:44:39.457179 env[1308]: 2025-10-31 05:44:39.344 [INFO][4471] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" Oct 31 05:44:39.457179 env[1308]: 2025-10-31 05:44:39.405 [INFO][4488] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" HandleID="k8s-pod-network.ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" Workload="srv--f2mor.gb1.brightbox.com-k8s-goldmane--666569f655--xdjq9-eth0" Oct 31 05:44:39.457179 env[1308]: 2025-10-31 05:44:39.406 [INFO][4488] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 05:44:39.457179 env[1308]: 2025-10-31 05:44:39.413 [INFO][4488] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 05:44:39.457179 env[1308]: 2025-10-31 05:44:39.431 [WARNING][4488] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" HandleID="k8s-pod-network.ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" Workload="srv--f2mor.gb1.brightbox.com-k8s-goldmane--666569f655--xdjq9-eth0" Oct 31 05:44:39.457179 env[1308]: 2025-10-31 05:44:39.431 [INFO][4488] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" HandleID="k8s-pod-network.ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" Workload="srv--f2mor.gb1.brightbox.com-k8s-goldmane--666569f655--xdjq9-eth0" Oct 31 05:44:39.457179 env[1308]: 2025-10-31 05:44:39.435 [INFO][4488] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 05:44:39.457179 env[1308]: 2025-10-31 05:44:39.454 [INFO][4471] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" Oct 31 05:44:39.462001 systemd[1]: run-netns-cni\x2db3a1c44f\x2d860f\x2da46d\x2d07ff\x2deedfea58e498.mount: Deactivated successfully. 
Oct 31 05:44:39.463720 env[1308]: time="2025-10-31T05:44:39.463656098Z" level=info msg="TearDown network for sandbox \"ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02\" successfully" Oct 31 05:44:39.463823 env[1308]: time="2025-10-31T05:44:39.463719937Z" level=info msg="StopPodSandbox for \"ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02\" returns successfully" Oct 31 05:44:39.465018 env[1308]: time="2025-10-31T05:44:39.464979251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-xdjq9,Uid:2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c,Namespace:calico-system,Attempt:1,}" Oct 31 05:44:39.683652 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 31 05:44:39.685347 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali225666ae0fb: link becomes ready Oct 31 05:44:39.687706 systemd-networkd[1069]: cali225666ae0fb: Link UP Oct 31 05:44:39.688057 systemd-networkd[1069]: cali225666ae0fb: Gained carrier Oct 31 05:44:39.706029 kubelet[2197]: E1031 05:44:39.705918 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ff9f49d5d-sjjrx" podUID="4433a427-a60f-4547-95ae-ea306784cb66" Oct 31 05:44:39.742911 env[1308]: 2025-10-31 05:44:39.528 [INFO][4506] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--f2mor.gb1.brightbox.com-k8s-goldmane--666569f655--xdjq9-eth0 goldmane-666569f655- calico-system 2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c 1086 0 2025-10-31 05:43:57 +0000 UTC map[app.kubernetes.io/name:goldmane 
k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-f2mor.gb1.brightbox.com goldmane-666569f655-xdjq9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali225666ae0fb [] [] }} ContainerID="ea3ec7af8964d615a9efe31f808e8ba106691b91a353e0afbe5bdbc71424122c" Namespace="calico-system" Pod="goldmane-666569f655-xdjq9" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-goldmane--666569f655--xdjq9-" Oct 31 05:44:39.742911 env[1308]: 2025-10-31 05:44:39.528 [INFO][4506] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ea3ec7af8964d615a9efe31f808e8ba106691b91a353e0afbe5bdbc71424122c" Namespace="calico-system" Pod="goldmane-666569f655-xdjq9" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-goldmane--666569f655--xdjq9-eth0" Oct 31 05:44:39.742911 env[1308]: 2025-10-31 05:44:39.593 [INFO][4521] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea3ec7af8964d615a9efe31f808e8ba106691b91a353e0afbe5bdbc71424122c" HandleID="k8s-pod-network.ea3ec7af8964d615a9efe31f808e8ba106691b91a353e0afbe5bdbc71424122c" Workload="srv--f2mor.gb1.brightbox.com-k8s-goldmane--666569f655--xdjq9-eth0" Oct 31 05:44:39.742911 env[1308]: 2025-10-31 05:44:39.594 [INFO][4521] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ea3ec7af8964d615a9efe31f808e8ba106691b91a353e0afbe5bdbc71424122c" HandleID="k8s-pod-network.ea3ec7af8964d615a9efe31f808e8ba106691b91a353e0afbe5bdbc71424122c" Workload="srv--f2mor.gb1.brightbox.com-k8s-goldmane--666569f655--xdjq9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003257d0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-f2mor.gb1.brightbox.com", "pod":"goldmane-666569f655-xdjq9", "timestamp":"2025-10-31 05:44:39.593597043 +0000 UTC"}, Hostname:"srv-f2mor.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 05:44:39.742911 env[1308]: 2025-10-31 05:44:39.596 [INFO][4521] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 05:44:39.742911 env[1308]: 2025-10-31 05:44:39.597 [INFO][4521] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 05:44:39.742911 env[1308]: 2025-10-31 05:44:39.597 [INFO][4521] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-f2mor.gb1.brightbox.com' Oct 31 05:44:39.742911 env[1308]: 2025-10-31 05:44:39.609 [INFO][4521] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ea3ec7af8964d615a9efe31f808e8ba106691b91a353e0afbe5bdbc71424122c" host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:39.742911 env[1308]: 2025-10-31 05:44:39.617 [INFO][4521] ipam/ipam.go 394: Looking up existing affinities for host host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:39.742911 env[1308]: 2025-10-31 05:44:39.624 [INFO][4521] ipam/ipam.go 511: Trying affinity for 192.168.24.128/26 host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:39.742911 env[1308]: 2025-10-31 05:44:39.626 [INFO][4521] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.128/26 host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:39.742911 env[1308]: 2025-10-31 05:44:39.630 [INFO][4521] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.128/26 host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:39.742911 env[1308]: 2025-10-31 05:44:39.630 [INFO][4521] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.24.128/26 handle="k8s-pod-network.ea3ec7af8964d615a9efe31f808e8ba106691b91a353e0afbe5bdbc71424122c" host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:39.742911 env[1308]: 2025-10-31 05:44:39.632 [INFO][4521] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.ea3ec7af8964d615a9efe31f808e8ba106691b91a353e0afbe5bdbc71424122c Oct 31 05:44:39.742911 env[1308]: 2025-10-31 05:44:39.640 [INFO][4521] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.24.128/26 handle="k8s-pod-network.ea3ec7af8964d615a9efe31f808e8ba106691b91a353e0afbe5bdbc71424122c" host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:39.742911 env[1308]: 2025-10-31 05:44:39.663 [INFO][4521] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.24.135/26] block=192.168.24.128/26 handle="k8s-pod-network.ea3ec7af8964d615a9efe31f808e8ba106691b91a353e0afbe5bdbc71424122c" host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:39.742911 env[1308]: 2025-10-31 05:44:39.663 [INFO][4521] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.135/26] handle="k8s-pod-network.ea3ec7af8964d615a9efe31f808e8ba106691b91a353e0afbe5bdbc71424122c" host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:39.742911 env[1308]: 2025-10-31 05:44:39.663 [INFO][4521] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 05:44:39.742911 env[1308]: 2025-10-31 05:44:39.663 [INFO][4521] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.24.135/26] IPv6=[] ContainerID="ea3ec7af8964d615a9efe31f808e8ba106691b91a353e0afbe5bdbc71424122c" HandleID="k8s-pod-network.ea3ec7af8964d615a9efe31f808e8ba106691b91a353e0afbe5bdbc71424122c" Workload="srv--f2mor.gb1.brightbox.com-k8s-goldmane--666569f655--xdjq9-eth0" Oct 31 05:44:39.746812 env[1308]: 2025-10-31 05:44:39.669 [INFO][4506] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ea3ec7af8964d615a9efe31f808e8ba106691b91a353e0afbe5bdbc71424122c" Namespace="calico-system" Pod="goldmane-666569f655-xdjq9" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-goldmane--666569f655--xdjq9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f2mor.gb1.brightbox.com-k8s-goldmane--666569f655--xdjq9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 43, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f2mor.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-666569f655-xdjq9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.24.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.goldmane"}, InterfaceName:"cali225666ae0fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:44:39.746812 env[1308]: 2025-10-31 05:44:39.669 [INFO][4506] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.135/32] ContainerID="ea3ec7af8964d615a9efe31f808e8ba106691b91a353e0afbe5bdbc71424122c" Namespace="calico-system" Pod="goldmane-666569f655-xdjq9" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-goldmane--666569f655--xdjq9-eth0" Oct 31 05:44:39.746812 env[1308]: 2025-10-31 05:44:39.670 [INFO][4506] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali225666ae0fb ContainerID="ea3ec7af8964d615a9efe31f808e8ba106691b91a353e0afbe5bdbc71424122c" Namespace="calico-system" Pod="goldmane-666569f655-xdjq9" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-goldmane--666569f655--xdjq9-eth0" Oct 31 05:44:39.746812 env[1308]: 2025-10-31 05:44:39.690 [INFO][4506] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea3ec7af8964d615a9efe31f808e8ba106691b91a353e0afbe5bdbc71424122c" Namespace="calico-system" Pod="goldmane-666569f655-xdjq9" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-goldmane--666569f655--xdjq9-eth0" Oct 31 05:44:39.746812 env[1308]: 2025-10-31 05:44:39.694 [INFO][4506] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ea3ec7af8964d615a9efe31f808e8ba106691b91a353e0afbe5bdbc71424122c" Namespace="calico-system" Pod="goldmane-666569f655-xdjq9" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-goldmane--666569f655--xdjq9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f2mor.gb1.brightbox.com-k8s-goldmane--666569f655--xdjq9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c", 
ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 43, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f2mor.gb1.brightbox.com", ContainerID:"ea3ec7af8964d615a9efe31f808e8ba106691b91a353e0afbe5bdbc71424122c", Pod:"goldmane-666569f655-xdjq9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.24.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali225666ae0fb", MAC:"f2:9a:e3:33:db:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:44:39.746812 env[1308]: 2025-10-31 05:44:39.717 [INFO][4506] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ea3ec7af8964d615a9efe31f808e8ba106691b91a353e0afbe5bdbc71424122c" Namespace="calico-system" Pod="goldmane-666569f655-xdjq9" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-goldmane--666569f655--xdjq9-eth0" Oct 31 05:44:39.751733 systemd-networkd[1069]: cali6b4edd9f57a: Gained IPv6LL Oct 31 05:44:39.799512 env[1308]: time="2025-10-31T05:44:39.799118555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 05:44:39.799512 env[1308]: time="2025-10-31T05:44:39.799188402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 05:44:39.799512 env[1308]: time="2025-10-31T05:44:39.799209277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 05:44:39.799512 env[1308]: time="2025-10-31T05:44:39.799406488Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea3ec7af8964d615a9efe31f808e8ba106691b91a353e0afbe5bdbc71424122c pid=4556 runtime=io.containerd.runc.v2 Oct 31 05:44:39.807000 audit[4555]: NETFILTER_CFG table=filter:120 family=2 entries=14 op=nft_register_rule pid=4555 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:44:39.807000 audit[4555]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff88f9d5a0 a2=0 a3=7fff88f9d58c items=0 ppid=2302 pid=4555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:39.822874 kernel: audit: type=1325 audit(1761889479.807:437): table=filter:120 family=2 entries=14 op=nft_register_rule pid=4555 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:44:39.822993 kernel: audit: type=1300 audit(1761889479.807:437): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff88f9d5a0 a2=0 a3=7fff88f9d58c items=0 ppid=2302 pid=4555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:39.807000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:44:39.827664 kernel: audit: type=1327 audit(1761889479.807:437): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:44:39.827876 kernel: audit: type=1325 audit(1761889479.821:438): table=nat:121 family=2 entries=20 op=nft_register_rule pid=4555 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:44:39.821000 audit[4555]: NETFILTER_CFG table=nat:121 family=2 entries=20 op=nft_register_rule pid=4555 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:44:39.821000 audit[4555]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff88f9d5a0 a2=0 a3=7fff88f9d58c items=0 ppid=2302 pid=4555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:39.839500 systemd-networkd[1069]: cali7611c19a5e1: Link UP Oct 31 05:44:39.841559 kernel: audit: type=1300 audit(1761889479.821:438): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff88f9d5a0 a2=0 a3=7fff88f9d58c items=0 ppid=2302 pid=4555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:39.853880 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali7611c19a5e1: link becomes ready Oct 31 05:44:39.821000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:44:39.854175 systemd-networkd[1069]: cali7611c19a5e1: Gained carrier Oct 31 05:44:39.861922 kernel: audit: type=1327 audit(1761889479.821:438): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:44:39.861000 audit[4574]: NETFILTER_CFG table=filter:122 family=2 entries=64 op=nft_register_chain pid=4574 subj=system_u:system_r:kernel_t:s0 
comm="iptables-nft-re" Oct 31 05:44:39.867628 kernel: audit: type=1325 audit(1761889479.861:439): table=filter:122 family=2 entries=64 op=nft_register_chain pid=4574 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 05:44:39.861000 audit[4574]: SYSCALL arch=c000003e syscall=46 success=yes exit=31120 a0=3 a1=7fffd839ac00 a2=0 a3=7fffd839abec items=0 ppid=3660 pid=4574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:39.861000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 05:44:39.893884 env[1308]: 2025-10-31 05:44:39.553 [INFO][4497] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--f2mor.gb1.brightbox.com-k8s-calico--kube--controllers--58d798db8c--5nl8j-eth0 calico-kube-controllers-58d798db8c- calico-system 6f047019-b3ae-41f9-bdae-4d0664c67b92 1085 0 2025-10-31 05:43:59 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:58d798db8c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-f2mor.gb1.brightbox.com calico-kube-controllers-58d798db8c-5nl8j eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7611c19a5e1 [] [] }} ContainerID="da3cbca0751ddfc19378d44f0cb856461343951506ec04d1a5054aa23de7b0e2" Namespace="calico-system" Pod="calico-kube-controllers-58d798db8c-5nl8j" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-calico--kube--controllers--58d798db8c--5nl8j-" Oct 31 05:44:39.893884 env[1308]: 2025-10-31 05:44:39.553 [INFO][4497] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="da3cbca0751ddfc19378d44f0cb856461343951506ec04d1a5054aa23de7b0e2" Namespace="calico-system" Pod="calico-kube-controllers-58d798db8c-5nl8j" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-calico--kube--controllers--58d798db8c--5nl8j-eth0" Oct 31 05:44:39.893884 env[1308]: 2025-10-31 05:44:39.617 [INFO][4529] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da3cbca0751ddfc19378d44f0cb856461343951506ec04d1a5054aa23de7b0e2" HandleID="k8s-pod-network.da3cbca0751ddfc19378d44f0cb856461343951506ec04d1a5054aa23de7b0e2" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--kube--controllers--58d798db8c--5nl8j-eth0" Oct 31 05:44:39.893884 env[1308]: 2025-10-31 05:44:39.617 [INFO][4529] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="da3cbca0751ddfc19378d44f0cb856461343951506ec04d1a5054aa23de7b0e2" HandleID="k8s-pod-network.da3cbca0751ddfc19378d44f0cb856461343951506ec04d1a5054aa23de7b0e2" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--kube--controllers--58d798db8c--5nl8j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ccfe0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-f2mor.gb1.brightbox.com", "pod":"calico-kube-controllers-58d798db8c-5nl8j", "timestamp":"2025-10-31 05:44:39.617087949 +0000 UTC"}, Hostname:"srv-f2mor.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 05:44:39.893884 env[1308]: 2025-10-31 05:44:39.617 [INFO][4529] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 05:44:39.893884 env[1308]: 2025-10-31 05:44:39.663 [INFO][4529] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 05:44:39.893884 env[1308]: 2025-10-31 05:44:39.663 [INFO][4529] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-f2mor.gb1.brightbox.com' Oct 31 05:44:39.893884 env[1308]: 2025-10-31 05:44:39.744 [INFO][4529] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.da3cbca0751ddfc19378d44f0cb856461343951506ec04d1a5054aa23de7b0e2" host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:39.893884 env[1308]: 2025-10-31 05:44:39.783 [INFO][4529] ipam/ipam.go 394: Looking up existing affinities for host host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:39.893884 env[1308]: 2025-10-31 05:44:39.790 [INFO][4529] ipam/ipam.go 511: Trying affinity for 192.168.24.128/26 host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:39.893884 env[1308]: 2025-10-31 05:44:39.793 [INFO][4529] ipam/ipam.go 158: Attempting to load block cidr=192.168.24.128/26 host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:39.893884 env[1308]: 2025-10-31 05:44:39.798 [INFO][4529] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.24.128/26 host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:39.893884 env[1308]: 2025-10-31 05:44:39.798 [INFO][4529] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.24.128/26 handle="k8s-pod-network.da3cbca0751ddfc19378d44f0cb856461343951506ec04d1a5054aa23de7b0e2" host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:39.893884 env[1308]: 2025-10-31 05:44:39.801 [INFO][4529] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.da3cbca0751ddfc19378d44f0cb856461343951506ec04d1a5054aa23de7b0e2 Oct 31 05:44:39.893884 env[1308]: 2025-10-31 05:44:39.809 [INFO][4529] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.24.128/26 handle="k8s-pod-network.da3cbca0751ddfc19378d44f0cb856461343951506ec04d1a5054aa23de7b0e2" host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:39.893884 env[1308]: 2025-10-31 05:44:39.828 [INFO][4529] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.24.136/26] 
block=192.168.24.128/26 handle="k8s-pod-network.da3cbca0751ddfc19378d44f0cb856461343951506ec04d1a5054aa23de7b0e2" host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:39.893884 env[1308]: 2025-10-31 05:44:39.828 [INFO][4529] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.24.136/26] handle="k8s-pod-network.da3cbca0751ddfc19378d44f0cb856461343951506ec04d1a5054aa23de7b0e2" host="srv-f2mor.gb1.brightbox.com" Oct 31 05:44:39.893884 env[1308]: 2025-10-31 05:44:39.828 [INFO][4529] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 05:44:39.893884 env[1308]: 2025-10-31 05:44:39.828 [INFO][4529] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.24.136/26] IPv6=[] ContainerID="da3cbca0751ddfc19378d44f0cb856461343951506ec04d1a5054aa23de7b0e2" HandleID="k8s-pod-network.da3cbca0751ddfc19378d44f0cb856461343951506ec04d1a5054aa23de7b0e2" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--kube--controllers--58d798db8c--5nl8j-eth0" Oct 31 05:44:39.895747 env[1308]: 2025-10-31 05:44:39.834 [INFO][4497] cni-plugin/k8s.go 418: Populated endpoint ContainerID="da3cbca0751ddfc19378d44f0cb856461343951506ec04d1a5054aa23de7b0e2" Namespace="calico-system" Pod="calico-kube-controllers-58d798db8c-5nl8j" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-calico--kube--controllers--58d798db8c--5nl8j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f2mor.gb1.brightbox.com-k8s-calico--kube--controllers--58d798db8c--5nl8j-eth0", GenerateName:"calico-kube-controllers-58d798db8c-", Namespace:"calico-system", SelfLink:"", UID:"6f047019-b3ae-41f9-bdae-4d0664c67b92", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 43, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"58d798db8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f2mor.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-58d798db8c-5nl8j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.24.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7611c19a5e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:44:39.895747 env[1308]: 2025-10-31 05:44:39.835 [INFO][4497] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.24.136/32] ContainerID="da3cbca0751ddfc19378d44f0cb856461343951506ec04d1a5054aa23de7b0e2" Namespace="calico-system" Pod="calico-kube-controllers-58d798db8c-5nl8j" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-calico--kube--controllers--58d798db8c--5nl8j-eth0" Oct 31 05:44:39.895747 env[1308]: 2025-10-31 05:44:39.835 [INFO][4497] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7611c19a5e1 ContainerID="da3cbca0751ddfc19378d44f0cb856461343951506ec04d1a5054aa23de7b0e2" Namespace="calico-system" Pod="calico-kube-controllers-58d798db8c-5nl8j" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-calico--kube--controllers--58d798db8c--5nl8j-eth0" Oct 31 05:44:39.895747 env[1308]: 2025-10-31 05:44:39.841 [INFO][4497] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da3cbca0751ddfc19378d44f0cb856461343951506ec04d1a5054aa23de7b0e2" Namespace="calico-system" Pod="calico-kube-controllers-58d798db8c-5nl8j" 
WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-calico--kube--controllers--58d798db8c--5nl8j-eth0" Oct 31 05:44:39.895747 env[1308]: 2025-10-31 05:44:39.841 [INFO][4497] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="da3cbca0751ddfc19378d44f0cb856461343951506ec04d1a5054aa23de7b0e2" Namespace="calico-system" Pod="calico-kube-controllers-58d798db8c-5nl8j" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-calico--kube--controllers--58d798db8c--5nl8j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f2mor.gb1.brightbox.com-k8s-calico--kube--controllers--58d798db8c--5nl8j-eth0", GenerateName:"calico-kube-controllers-58d798db8c-", Namespace:"calico-system", SelfLink:"", UID:"6f047019-b3ae-41f9-bdae-4d0664c67b92", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 43, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58d798db8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f2mor.gb1.brightbox.com", ContainerID:"da3cbca0751ddfc19378d44f0cb856461343951506ec04d1a5054aa23de7b0e2", Pod:"calico-kube-controllers-58d798db8c-5nl8j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.24.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, 
InterfaceName:"cali7611c19a5e1", MAC:"2e:0c:9c:a7:81:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:44:39.895747 env[1308]: 2025-10-31 05:44:39.877 [INFO][4497] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="da3cbca0751ddfc19378d44f0cb856461343951506ec04d1a5054aa23de7b0e2" Namespace="calico-system" Pod="calico-kube-controllers-58d798db8c-5nl8j" WorkloadEndpoint="srv--f2mor.gb1.brightbox.com-k8s-calico--kube--controllers--58d798db8c--5nl8j-eth0" Oct 31 05:44:39.903000 audit[4582]: NETFILTER_CFG table=filter:123 family=2 entries=60 op=nft_register_chain pid=4582 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Oct 31 05:44:39.903000 audit[4582]: SYSCALL arch=c000003e syscall=46 success=yes exit=26704 a0=3 a1=7ffc72a3bd70 a2=0 a3=7ffc72a3bd5c items=0 ppid=3660 pid=4582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:39.903000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Oct 31 05:44:39.936006 env[1308]: time="2025-10-31T05:44:39.935882174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 05:44:39.936222 env[1308]: time="2025-10-31T05:44:39.936016884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 05:44:39.936222 env[1308]: time="2025-10-31T05:44:39.936072127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 05:44:39.936351 env[1308]: time="2025-10-31T05:44:39.936274542Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/da3cbca0751ddfc19378d44f0cb856461343951506ec04d1a5054aa23de7b0e2 pid=4602 runtime=io.containerd.runc.v2 Oct 31 05:44:40.005877 env[1308]: time="2025-10-31T05:44:40.005779237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-xdjq9,Uid:2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c,Namespace:calico-system,Attempt:1,} returns sandbox id \"ea3ec7af8964d615a9efe31f808e8ba106691b91a353e0afbe5bdbc71424122c\"" Oct 31 05:44:40.019465 env[1308]: time="2025-10-31T05:44:40.019396359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 05:44:40.090149 env[1308]: time="2025-10-31T05:44:40.090073117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58d798db8c-5nl8j,Uid:6f047019-b3ae-41f9-bdae-4d0664c67b92,Namespace:calico-system,Attempt:1,} returns sandbox id \"da3cbca0751ddfc19378d44f0cb856461343951506ec04d1a5054aa23de7b0e2\"" Oct 31 05:44:40.334572 env[1308]: time="2025-10-31T05:44:40.334132962Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 05:44:40.335805 env[1308]: time="2025-10-31T05:44:40.335722114Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 05:44:40.336166 kubelet[2197]: E1031 05:44:40.336111 2197 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 05:44:40.336683 kubelet[2197]: E1031 05:44:40.336648 2197 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 05:44:40.337218 kubelet[2197]: E1031 05:44:40.337140 2197 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8ck5,ReadOnly:true,MountPath:/va
r/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-xdjq9_calico-system(2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 05:44:40.338570 env[1308]: time="2025-10-31T05:44:40.337315568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 05:44:40.339687 kubelet[2197]: E1031 05:44:40.339645 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xdjq9" podUID="2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c" Oct 31 05:44:40.651703 env[1308]: time="2025-10-31T05:44:40.651077921Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 05:44:40.652713 env[1308]: time="2025-10-31T05:44:40.652488073Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 05:44:40.653063 kubelet[2197]: E1031 05:44:40.652916 2197 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 05:44:40.653063 kubelet[2197]: E1031 05:44:40.653004 2197 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 05:44:40.653427 kubelet[2197]: E1031 05:44:40.653260 2197 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bn8m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-58d798db8c-5nl8j_calico-system(6f047019-b3ae-41f9-bdae-4d0664c67b92): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 05:44:40.654762 kubelet[2197]: E1031 05:44:40.654655 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-58d798db8c-5nl8j" podUID="6f047019-b3ae-41f9-bdae-4d0664c67b92" Oct 31 05:44:40.712037 kubelet[2197]: E1031 05:44:40.709579 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-58d798db8c-5nl8j" podUID="6f047019-b3ae-41f9-bdae-4d0664c67b92" Oct 31 05:44:40.717776 kubelet[2197]: E1031 05:44:40.717735 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xdjq9" podUID="2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c" Oct 31 05:44:40.720730 kubelet[2197]: E1031 05:44:40.720682 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ff9f49d5d-sjjrx" podUID="4433a427-a60f-4547-95ae-ea306784cb66" Oct 31 05:44:40.811000 audit[4651]: NETFILTER_CFG table=filter:124 family=2 entries=14 op=nft_register_rule pid=4651 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:44:40.811000 audit[4651]: SYSCALL arch=c000003e syscall=46 
success=yes exit=5248 a0=3 a1=7ffdcd18b080 a2=0 a3=7ffdcd18b06c items=0 ppid=2302 pid=4651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:40.811000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:44:40.816000 audit[4651]: NETFILTER_CFG table=nat:125 family=2 entries=20 op=nft_register_rule pid=4651 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:44:40.816000 audit[4651]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdcd18b080 a2=0 a3=7ffdcd18b06c items=0 ppid=2302 pid=4651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:44:40.816000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:44:41.094794 systemd-networkd[1069]: cali225666ae0fb: Gained IPv6LL Oct 31 05:44:41.730539 kubelet[2197]: E1031 05:44:41.729491 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-58d798db8c-5nl8j" podUID="6f047019-b3ae-41f9-bdae-4d0664c67b92" Oct 31 05:44:41.733935 kubelet[2197]: E1031 05:44:41.733851 2197 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xdjq9" podUID="2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c" Oct 31 05:44:41.800588 systemd-networkd[1069]: cali7611c19a5e1: Gained IPv6LL Oct 31 05:44:42.228593 env[1308]: time="2025-10-31T05:44:42.228412238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 05:44:42.649512 env[1308]: time="2025-10-31T05:44:42.649424890Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 05:44:42.651113 env[1308]: time="2025-10-31T05:44:42.651038812Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 05:44:42.651527 kubelet[2197]: E1031 05:44:42.651438 2197 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 05:44:42.651657 kubelet[2197]: E1031 05:44:42.651596 2197 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 05:44:42.652410 kubelet[2197]: E1031 05:44:42.652192 2197 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v6crh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7ff9f49d5d-dq5pc_calico-apiserver(ac96e24b-c0dd-48fd-838b-a540fa2a89c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 05:44:42.653611 kubelet[2197]: E1031 05:44:42.653548 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ff9f49d5d-dq5pc" podUID="ac96e24b-c0dd-48fd-838b-a540fa2a89c0" Oct 31 05:44:45.226610 env[1308]: time="2025-10-31T05:44:45.225847585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 05:44:45.539369 env[1308]: 
time="2025-10-31T05:44:45.539277516Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 05:44:45.541288 env[1308]: time="2025-10-31T05:44:45.541211222Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 05:44:45.541756 kubelet[2197]: E1031 05:44:45.541698 2197 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 05:44:45.542387 kubelet[2197]: E1031 05:44:45.542350 2197 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 05:44:45.542896 kubelet[2197]: E1031 05:44:45.542832 2197 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8668490f11434cfabca44ddf284789cf,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tjkfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f447487f8-8md8h_calico-system(da74ef2c-d536-4fbc-9b28-ba72dfbbfc21): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 05:44:45.543964 env[1308]: time="2025-10-31T05:44:45.543921068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 05:44:45.850356 env[1308]: 
time="2025-10-31T05:44:45.850076595Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 05:44:45.852138 env[1308]: time="2025-10-31T05:44:45.852078060Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 05:44:45.853180 kubelet[2197]: E1031 05:44:45.852572 2197 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 05:44:45.853180 kubelet[2197]: E1031 05:44:45.852640 2197 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 05:44:45.853180 kubelet[2197]: E1031 05:44:45.853010 2197 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q9rn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6jdvb_calico-system(749c5f31-df45-44a4-9a60-d28a8f071a0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 05:44:45.853523 env[1308]: time="2025-10-31T05:44:45.853127057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 05:44:46.160229 env[1308]: time="2025-10-31T05:44:46.160017061Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 05:44:46.162026 env[1308]: time="2025-10-31T05:44:46.161960318Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 05:44:46.163272 kubelet[2197]: E1031 05:44:46.162399 2197 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 05:44:46.163272 kubelet[2197]: E1031 05:44:46.162463 2197 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 05:44:46.163272 kubelet[2197]: E1031 05:44:46.162765 2197 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tjkfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f447487f8-8md8h_calico-system(da74ef2c-d536-4fbc-9b28-ba72dfbbfc21): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 05:44:46.164019 env[1308]: time="2025-10-31T05:44:46.163967694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 05:44:46.164669 kubelet[2197]: E1031 05:44:46.164578 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f447487f8-8md8h" podUID="da74ef2c-d536-4fbc-9b28-ba72dfbbfc21" Oct 31 05:44:46.468656 env[1308]: time="2025-10-31T05:44:46.468440918Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 05:44:46.471027 env[1308]: time="2025-10-31T05:44:46.470923401Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 05:44:46.471833 kubelet[2197]: E1031 05:44:46.471380 2197 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 05:44:46.471833 kubelet[2197]: E1031 05:44:46.471469 2197 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 05:44:46.471833 kubelet[2197]: E1031 05:44:46.471697 2197 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q9rn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6jdvb_calico-system(749c5f31-df45-44a4-9a60-d28a8f071a0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 05:44:46.473393 kubelet[2197]: E1031 05:44:46.473333 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6jdvb" podUID="749c5f31-df45-44a4-9a60-d28a8f071a0b" Oct 31 05:44:53.225070 kubelet[2197]: E1031 05:44:53.225013 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ff9f49d5d-dq5pc" podUID="ac96e24b-c0dd-48fd-838b-a540fa2a89c0" Oct 31 05:44:54.226148 env[1308]: time="2025-10-31T05:44:54.225817176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 05:44:54.532252 env[1308]: time="2025-10-31T05:44:54.532127313Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 05:44:54.534194 
env[1308]: time="2025-10-31T05:44:54.533987714Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 05:44:54.534634 kubelet[2197]: E1031 05:44:54.534560 2197 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 05:44:54.535468 kubelet[2197]: E1031 05:44:54.535399 2197 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 05:44:54.536058 kubelet[2197]: E1031 05:44:54.535966 2197 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bn8m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-58d798db8c-5nl8j_calico-system(6f047019-b3ae-41f9-bdae-4d0664c67b92): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 05:44:54.537639 kubelet[2197]: E1031 05:44:54.537592 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-58d798db8c-5nl8j" podUID="6f047019-b3ae-41f9-bdae-4d0664c67b92" Oct 31 05:44:55.226349 env[1308]: time="2025-10-31T05:44:55.226188258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 05:44:55.529358 env[1308]: 
time="2025-10-31T05:44:55.529234508Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 05:44:55.530752 env[1308]: time="2025-10-31T05:44:55.530583669Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 05:44:55.531169 kubelet[2197]: E1031 05:44:55.531092 2197 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 05:44:55.531363 kubelet[2197]: E1031 05:44:55.531316 2197 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 05:44:55.532081 env[1308]: time="2025-10-31T05:44:55.532035965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 05:44:55.532461 kubelet[2197]: E1031 05:44:55.532352 2197 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dr6kh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7ff9f49d5d-sjjrx_calico-apiserver(4433a427-a60f-4547-95ae-ea306784cb66): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 05:44:55.534893 kubelet[2197]: E1031 05:44:55.534017 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ff9f49d5d-sjjrx" podUID="4433a427-a60f-4547-95ae-ea306784cb66" Oct 31 05:44:55.843355 env[1308]: time="2025-10-31T05:44:55.843164744Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 05:44:55.848727 env[1308]: time="2025-10-31T05:44:55.848660990Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 05:44:55.849857 kubelet[2197]: E1031 05:44:55.849801 2197 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 05:44:55.849987 kubelet[2197]: E1031 05:44:55.849872 2197 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 05:44:55.850105 kubelet[2197]: E1031 05:44:55.850036 2197 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8ck5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-xdjq9_calico-system(2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 05:44:55.852478 kubelet[2197]: E1031 05:44:55.851456 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xdjq9" podUID="2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c" Oct 31 
05:44:57.591292 systemd[1]: run-containerd-runc-k8s.io-c0733a146273318292491838c0ff78b61e5f8f0a0a049924c2c2b81c5383735e-runc.73SEy0.mount: Deactivated successfully. Oct 31 05:44:59.228736 kubelet[2197]: E1031 05:44:59.228649 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6jdvb" podUID="749c5f31-df45-44a4-9a60-d28a8f071a0b" Oct 31 05:45:00.225787 kubelet[2197]: E1031 05:45:00.225705 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f447487f8-8md8h" podUID="da74ef2c-d536-4fbc-9b28-ba72dfbbfc21" Oct 31 05:45:00.269255 systemd[1]: Started sshd@9-10.244.21.74:22-139.178.68.195:36546.service. Oct 31 05:45:00.282898 kernel: kauditd_printk_skb: 11 callbacks suppressed Oct 31 05:45:00.284278 kernel: audit: type=1130 audit(1761889500.269:443): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.244.21.74:22-139.178.68.195:36546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:45:00.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.244.21.74:22-139.178.68.195:36546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 05:45:01.251011 sshd[4690]: Accepted publickey for core from 139.178.68.195 port 36546 ssh2: RSA SHA256:SNDspdr08ljC7u6YFsSEbFAM11P2/Di3eXjgL9Yd2IU Oct 31 05:45:01.260650 kernel: audit: type=1101 audit(1761889501.250:444): pid=4690 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:01.250000 audit[4690]: USER_ACCT pid=4690 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:01.261000 audit[4690]: CRED_ACQ pid=4690 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:01.268554 kernel: audit: type=1103 audit(1761889501.261:445): pid=4690 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:01.272558 kernel: audit: type=1006 audit(1761889501.261:446): pid=4690 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Oct 31 05:45:01.261000 audit[4690]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe1fdb5be0 a2=3 a3=0 items=0 ppid=1 pid=4690 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:45:01.279576 kernel: audit: type=1300 audit(1761889501.261:446): 
arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe1fdb5be0 a2=3 a3=0 items=0 ppid=1 pid=4690 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:45:01.261000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 05:45:01.282565 kernel: audit: type=1327 audit(1761889501.261:446): proctitle=737368643A20636F7265205B707269765D Oct 31 05:45:01.283776 sshd[4690]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 05:45:01.309994 systemd-logind[1296]: New session 10 of user core. Oct 31 05:45:01.313638 systemd[1]: Started session-10.scope. Oct 31 05:45:01.334000 audit[4690]: USER_START pid=4690 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:01.345254 kernel: audit: type=1105 audit(1761889501.334:447): pid=4690 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:01.343000 audit[4693]: CRED_ACQ pid=4693 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:01.353824 kernel: audit: type=1103 audit(1761889501.343:448): pid=4693 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' 
Oct 31 05:45:02.558689 sshd[4690]: pam_unix(sshd:session): session closed for user core Oct 31 05:45:02.561000 audit[4690]: USER_END pid=4690 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:02.567732 systemd[1]: sshd@9-10.244.21.74:22-139.178.68.195:36546.service: Deactivated successfully. Oct 31 05:45:02.569253 systemd[1]: session-10.scope: Deactivated successfully. Oct 31 05:45:02.572978 kernel: audit: type=1106 audit(1761889502.561:449): pid=4690 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:02.573017 systemd-logind[1296]: Session 10 logged out. Waiting for processes to exit. Oct 31 05:45:02.561000 audit[4690]: CRED_DISP pid=4690 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:02.577281 systemd-logind[1296]: Removed session 10. Oct 31 05:45:02.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.244.21.74:22-139.178.68.195:36546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 05:45:02.582064 kernel: audit: type=1104 audit(1761889502.561:450): pid=4690 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:07.226652 env[1308]: time="2025-10-31T05:45:07.226590533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 05:45:07.561561 env[1308]: time="2025-10-31T05:45:07.560924794Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 05:45:07.563924 env[1308]: time="2025-10-31T05:45:07.563819361Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 05:45:07.564510 kubelet[2197]: E1031 05:45:07.564441 2197 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 05:45:07.565289 kubelet[2197]: E1031 05:45:07.565235 2197 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 05:45:07.566599 kubelet[2197]: E1031 05:45:07.566504 2197 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v6crh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7ff9f49d5d-dq5pc_calico-apiserver(ac96e24b-c0dd-48fd-838b-a540fa2a89c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 05:45:07.574199 kubelet[2197]: E1031 05:45:07.574066 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ff9f49d5d-dq5pc" podUID="ac96e24b-c0dd-48fd-838b-a540fa2a89c0" Oct 31 05:45:07.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.244.21.74:22-139.178.68.195:57372 comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:45:07.716258 kernel: kauditd_printk_skb: 1 callbacks suppressed Oct 31 05:45:07.716362 kernel: audit: type=1130 audit(1761889507.708:452): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.244.21.74:22-139.178.68.195:57372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:45:07.709698 systemd[1]: Started sshd@10-10.244.21.74:22-139.178.68.195:57372.service. Oct 31 05:45:08.633000 audit[4705]: USER_ACCT pid=4705 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:08.641675 kernel: audit: type=1101 audit(1761889508.633:453): pid=4705 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:08.641804 sshd[4705]: Accepted publickey for core from 139.178.68.195 port 57372 ssh2: RSA SHA256:SNDspdr08ljC7u6YFsSEbFAM11P2/Di3eXjgL9Yd2IU Oct 31 05:45:08.640000 audit[4705]: CRED_ACQ pid=4705 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:08.643008 sshd[4705]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 05:45:08.649935 kernel: audit: type=1103 audit(1761889508.640:454): pid=4705 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" 
hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:08.650025 kernel: audit: type=1006 audit(1761889508.640:455): pid=4705 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Oct 31 05:45:08.640000 audit[4705]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffed88e9130 a2=3 a3=0 items=0 ppid=1 pid=4705 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:45:08.660879 kernel: audit: type=1300 audit(1761889508.640:455): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffed88e9130 a2=3 a3=0 items=0 ppid=1 pid=4705 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:45:08.640000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 05:45:08.663500 kernel: audit: type=1327 audit(1761889508.640:455): proctitle=737368643A20636F7265205B707269765D Oct 31 05:45:08.665638 systemd-logind[1296]: New session 11 of user core. Oct 31 05:45:08.669965 systemd[1]: Started session-11.scope. 
Oct 31 05:45:08.680000 audit[4705]: USER_START pid=4705 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:08.696786 kernel: audit: type=1105 audit(1761889508.680:456): pid=4705 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:08.696935 kernel: audit: type=1103 audit(1761889508.691:457): pid=4708 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:08.691000 audit[4708]: CRED_ACQ pid=4708 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:09.228320 kubelet[2197]: E1031 05:45:09.228208 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-58d798db8c-5nl8j" podUID="6f047019-b3ae-41f9-bdae-4d0664c67b92" Oct 31 05:45:09.244936 
kubelet[2197]: E1031 05:45:09.244851 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xdjq9" podUID="2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c" Oct 31 05:45:09.418848 sshd[4705]: pam_unix(sshd:session): session closed for user core Oct 31 05:45:09.419000 audit[4705]: USER_END pid=4705 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:09.448835 kernel: audit: type=1106 audit(1761889509.419:458): pid=4705 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:09.448597 systemd[1]: sshd@10-10.244.21.74:22-139.178.68.195:57372.service: Deactivated successfully. 
Oct 31 05:45:09.460312 kernel: audit: type=1104 audit(1761889509.424:459): pid=4705 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:09.424000 audit[4705]: CRED_DISP pid=4705 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:09.452645 systemd[1]: session-11.scope: Deactivated successfully. Oct 31 05:45:09.457923 systemd-logind[1296]: Session 11 logged out. Waiting for processes to exit. Oct 31 05:45:09.463100 systemd-logind[1296]: Removed session 11. Oct 31 05:45:09.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.244.21.74:22-139.178.68.195:57372 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 05:45:10.225908 kubelet[2197]: E1031 05:45:10.225837 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ff9f49d5d-sjjrx" podUID="4433a427-a60f-4547-95ae-ea306784cb66" Oct 31 05:45:10.226318 env[1308]: time="2025-10-31T05:45:10.226235427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 05:45:10.534620 env[1308]: time="2025-10-31T05:45:10.534484332Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 05:45:10.536492 env[1308]: time="2025-10-31T05:45:10.536383568Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 05:45:10.537181 kubelet[2197]: E1031 05:45:10.536989 2197 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 05:45:10.537181 kubelet[2197]: E1031 05:45:10.537089 2197 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 05:45:10.537846 kubelet[2197]: E1031 05:45:10.537344 2197 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q9rn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePo
licy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6jdvb_calico-system(749c5f31-df45-44a4-9a60-d28a8f071a0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 05:45:10.540340 env[1308]: time="2025-10-31T05:45:10.540268623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 05:45:10.884435 env[1308]: time="2025-10-31T05:45:10.884196761Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 05:45:10.886398 env[1308]: time="2025-10-31T05:45:10.886319677Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 05:45:10.887175 kubelet[2197]: E1031 05:45:10.887094 2197 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 05:45:10.887397 kubelet[2197]: E1031 05:45:10.887194 2197 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 05:45:10.888064 kubelet[2197]: E1031 05:45:10.887968 2197 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q9rn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Conta
inerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6jdvb_calico-system(749c5f31-df45-44a4-9a60-d28a8f071a0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 05:45:10.889343 kubelet[2197]: E1031 05:45:10.889257 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6jdvb" podUID="749c5f31-df45-44a4-9a60-d28a8f071a0b" Oct 31 05:45:14.226204 env[1308]: time="2025-10-31T05:45:14.225691137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 05:45:14.556843 env[1308]: time="2025-10-31T05:45:14.556751082Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 05:45:14.558288 env[1308]: time="2025-10-31T05:45:14.558220738Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 05:45:14.558672 kubelet[2197]: E1031 05:45:14.558607 2197 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 05:45:14.559231 kubelet[2197]: E1031 05:45:14.559197 2197 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 05:45:14.559629 kubelet[2197]: E1031 05:45:14.559555 2197 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8668490f11434cfabca44ddf284789cf,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tjkfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*100
01,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f447487f8-8md8h_calico-system(da74ef2c-d536-4fbc-9b28-ba72dfbbfc21): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 05:45:14.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.244.21.74:22-139.178.68.195:57182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:45:14.574381 kernel: kauditd_printk_skb: 1 callbacks suppressed Oct 31 05:45:14.574467 kernel: audit: type=1130 audit(1761889514.566:461): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.244.21.74:22-139.178.68.195:57182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:45:14.575054 env[1308]: time="2025-10-31T05:45:14.566957736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 05:45:14.567952 systemd[1]: Started sshd@11-10.244.21.74:22-139.178.68.195:57182.service. 
Oct 31 05:45:14.898292 env[1308]: time="2025-10-31T05:45:14.897383585Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 05:45:14.899172 env[1308]: time="2025-10-31T05:45:14.898999601Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 05:45:14.899401 kubelet[2197]: E1031 05:45:14.899327 2197 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 05:45:14.899562 kubelet[2197]: E1031 05:45:14.899419 2197 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 05:45:14.899777 kubelet[2197]: E1031 05:45:14.899686 2197 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tjkfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f447487f8-8md8h_calico-system(da74ef2c-d536-4fbc-9b28-ba72dfbbfc21): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 05:45:14.901845 kubelet[2197]: E1031 05:45:14.901345 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f447487f8-8md8h" podUID="da74ef2c-d536-4fbc-9b28-ba72dfbbfc21" Oct 31 05:45:15.498000 audit[4727]: USER_ACCT pid=4727 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:15.500480 sshd[4727]: Accepted publickey for core from 139.178.68.195 port 57182 ssh2: RSA SHA256:SNDspdr08ljC7u6YFsSEbFAM11P2/Di3eXjgL9Yd2IU Oct 31 05:45:15.507579 kernel: audit: type=1101 audit(1761889515.498:462): pid=4727 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:15.506000 audit[4727]: CRED_ACQ pid=4727 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:15.509268 sshd[4727]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 05:45:15.519955 kernel: audit: type=1103 audit(1761889515.506:463): pid=4727 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:15.520127 kernel: audit: type=1006 audit(1761889515.506:464): pid=4727 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Oct 31 05:45:15.506000 audit[4727]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff79cb5990 a2=3 a3=0 items=0 ppid=1 pid=4727 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:45:15.530327 kernel: audit: type=1300 audit(1761889515.506:464): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff79cb5990 a2=3 a3=0 items=0 ppid=1 pid=4727 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:45:15.531233 kernel: audit: type=1327 audit(1761889515.506:464): proctitle=737368643A20636F7265205B707269765D Oct 31 05:45:15.506000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 05:45:15.541169 systemd-logind[1296]: New session 12 of user core. Oct 31 05:45:15.542736 systemd[1]: Started session-12.scope. 
Oct 31 05:45:15.552000 audit[4727]: USER_START pid=4727 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:15.562636 kernel: audit: type=1105 audit(1761889515.552:465): pid=4727 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:15.561000 audit[4730]: CRED_ACQ pid=4730 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:15.570576 kernel: audit: type=1103 audit(1761889515.561:466): pid=4730 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:16.264639 sshd[4727]: pam_unix(sshd:session): session closed for user core Oct 31 05:45:16.264000 audit[4727]: USER_END pid=4727 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:16.274568 kernel: audit: type=1106 audit(1761889516.264:467): pid=4727 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail 
acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:16.273000 audit[4727]: CRED_DISP pid=4727 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:16.277460 systemd[1]: sshd@11-10.244.21.74:22-139.178.68.195:57182.service: Deactivated successfully. Oct 31 05:45:16.278855 systemd[1]: session-12.scope: Deactivated successfully. Oct 31 05:45:16.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.244.21.74:22-139.178.68.195:57182 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:45:16.281986 kernel: audit: type=1104 audit(1761889516.273:468): pid=4727 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:16.281852 systemd-logind[1296]: Session 12 logged out. Waiting for processes to exit. Oct 31 05:45:16.283032 systemd-logind[1296]: Removed session 12. 
Oct 31 05:45:20.226351 kubelet[2197]: E1031 05:45:20.226264 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ff9f49d5d-dq5pc" podUID="ac96e24b-c0dd-48fd-838b-a540fa2a89c0" Oct 31 05:45:21.227753 env[1308]: time="2025-10-31T05:45:21.227696026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 05:45:21.409928 systemd[1]: Started sshd@12-10.244.21.74:22-139.178.68.195:57188.service. Oct 31 05:45:21.416154 kernel: kauditd_printk_skb: 1 callbacks suppressed Oct 31 05:45:21.416273 kernel: audit: type=1130 audit(1761889521.408:470): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.244.21.74:22-139.178.68.195:57188 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:45:21.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.244.21.74:22-139.178.68.195:57188 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 05:45:21.556195 env[1308]: time="2025-10-31T05:45:21.556114245Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 05:45:21.557954 env[1308]: time="2025-10-31T05:45:21.557873620Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 05:45:21.558473 kubelet[2197]: E1031 05:45:21.558406 2197 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 05:45:21.559051 kubelet[2197]: E1031 05:45:21.558508 2197 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 05:45:21.559051 kubelet[2197]: E1031 05:45:21.558866 2197 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dr6kh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7ff9f49d5d-sjjrx_calico-apiserver(4433a427-a60f-4547-95ae-ea306784cb66): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 05:45:21.560650 kubelet[2197]: E1031 05:45:21.560570 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ff9f49d5d-sjjrx" podUID="4433a427-a60f-4547-95ae-ea306784cb66" Oct 31 05:45:22.319000 audit[4742]: USER_ACCT pid=4742 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:22.322792 sshd[4742]: Accepted publickey for core from 139.178.68.195 port 57188 ssh2: RSA SHA256:SNDspdr08ljC7u6YFsSEbFAM11P2/Di3eXjgL9Yd2IU Oct 31 05:45:22.324000 audit[4742]: CRED_ACQ pid=4742 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:22.330072 sshd[4742]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 05:45:22.336167 kernel: audit: type=1101 audit(1761889522.319:471): pid=4742 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:22.336938 kernel: 
audit: type=1103 audit(1761889522.324:472): pid=4742 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:22.337007 kernel: audit: type=1006 audit(1761889522.324:473): pid=4742 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Oct 31 05:45:22.324000 audit[4742]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcbfd5fa60 a2=3 a3=0 items=0 ppid=1 pid=4742 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:45:22.345938 systemd[1]: Started session-13.scope. Oct 31 05:45:22.348377 kernel: audit: type=1300 audit(1761889522.324:473): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcbfd5fa60 a2=3 a3=0 items=0 ppid=1 pid=4742 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:45:22.348450 kernel: audit: type=1327 audit(1761889522.324:473): proctitle=737368643A20636F7265205B707269765D Oct 31 05:45:22.324000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 05:45:22.347726 systemd-logind[1296]: New session 13 of user core. 
Oct 31 05:45:22.355000 audit[4742]: USER_START pid=4742 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:22.366057 kernel: audit: type=1105 audit(1761889522.355:474): pid=4742 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:22.366000 audit[4745]: CRED_ACQ pid=4745 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:22.374666 kernel: audit: type=1103 audit(1761889522.366:475): pid=4745 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:23.069395 sshd[4742]: pam_unix(sshd:session): session closed for user core Oct 31 05:45:23.069000 audit[4742]: USER_END pid=4742 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:23.069000 audit[4742]: CRED_DISP pid=4742 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 
31 05:45:23.084870 kernel: audit: type=1106 audit(1761889523.069:476): pid=4742 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:23.085005 kernel: audit: type=1104 audit(1761889523.069:477): pid=4742 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:23.085696 systemd[1]: sshd@12-10.244.21.74:22-139.178.68.195:57188.service: Deactivated successfully. Oct 31 05:45:23.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.244.21.74:22-139.178.68.195:57188 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:45:23.087010 systemd[1]: session-13.scope: Deactivated successfully. Oct 31 05:45:23.087938 systemd-logind[1296]: Session 13 logged out. Waiting for processes to exit. Oct 31 05:45:23.089494 systemd-logind[1296]: Removed session 13. Oct 31 05:45:23.219196 systemd[1]: Started sshd@13-10.244.21.74:22-139.178.68.195:53076.service. Oct 31 05:45:23.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.244.21.74:22-139.178.68.195:53076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 05:45:23.236810 env[1308]: time="2025-10-31T05:45:23.234112008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 05:45:23.549289 env[1308]: time="2025-10-31T05:45:23.548949303Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 05:45:23.551968 env[1308]: time="2025-10-31T05:45:23.551719592Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 05:45:23.552186 kubelet[2197]: E1031 05:45:23.552115 2197 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 05:45:23.552720 kubelet[2197]: E1031 05:45:23.552188 2197 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 05:45:23.552720 kubelet[2197]: E1031 05:45:23.552374 2197 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bn8m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-58d798db8c-5nl8j_calico-system(6f047019-b3ae-41f9-bdae-4d0664c67b92): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 05:45:23.553995 kubelet[2197]: E1031 05:45:23.553954 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-58d798db8c-5nl8j" podUID="6f047019-b3ae-41f9-bdae-4d0664c67b92" Oct 31 05:45:24.129000 audit[4756]: USER_ACCT pid=4756 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:24.131200 sshd[4756]: Accepted publickey for core from 139.178.68.195 port 53076 ssh2: RSA SHA256:SNDspdr08ljC7u6YFsSEbFAM11P2/Di3eXjgL9Yd2IU Oct 31 05:45:24.131000 audit[4756]: CRED_ACQ pid=4756 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:24.131000 audit[4756]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffce87c5df0 a2=3 a3=0 items=0 ppid=1 pid=4756 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:45:24.131000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 05:45:24.133183 sshd[4756]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 05:45:24.143305 systemd-logind[1296]: New session 14 of user core. Oct 31 05:45:24.144959 systemd[1]: Started session-14.scope. 
Oct 31 05:45:24.158000 audit[4756]: USER_START pid=4756 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:24.161000 audit[4759]: CRED_ACQ pid=4759 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:24.226211 env[1308]: time="2025-10-31T05:45:24.226147500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 05:45:24.535712 env[1308]: time="2025-10-31T05:45:24.535603162Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 05:45:24.537328 env[1308]: time="2025-10-31T05:45:24.537234321Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 05:45:24.537789 kubelet[2197]: E1031 05:45:24.537724 2197 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 05:45:24.537914 kubelet[2197]: E1031 05:45:24.537807 2197 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 05:45:24.538135 kubelet[2197]: E1031 05:45:24.538057 2197 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8ck5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-xdjq9_calico-system(2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 05:45:24.539740 kubelet[2197]: E1031 05:45:24.539692 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xdjq9" podUID="2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c" Oct 31 
05:45:25.037331 sshd[4756]: pam_unix(sshd:session): session closed for user core Oct 31 05:45:25.038000 audit[4756]: USER_END pid=4756 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:25.038000 audit[4756]: CRED_DISP pid=4756 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:25.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.244.21.74:22-139.178.68.195:53076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:45:25.042445 systemd[1]: sshd@13-10.244.21.74:22-139.178.68.195:53076.service: Deactivated successfully. Oct 31 05:45:25.043680 systemd[1]: session-14.scope: Deactivated successfully. Oct 31 05:45:25.045876 systemd-logind[1296]: Session 14 logged out. Waiting for processes to exit. Oct 31 05:45:25.047404 systemd-logind[1296]: Removed session 14. Oct 31 05:45:25.183203 systemd[1]: Started sshd@14-10.244.21.74:22-139.178.68.195:53090.service. Oct 31 05:45:25.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.244.21.74:22-139.178.68.195:53090 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 05:45:26.082000 audit[4767]: USER_ACCT pid=4767 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:26.084821 sshd[4767]: Accepted publickey for core from 139.178.68.195 port 53090 ssh2: RSA SHA256:SNDspdr08ljC7u6YFsSEbFAM11P2/Di3eXjgL9Yd2IU Oct 31 05:45:26.086000 audit[4767]: CRED_ACQ pid=4767 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:26.086000 audit[4767]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff7c5d4e70 a2=3 a3=0 items=0 ppid=1 pid=4767 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:45:26.086000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 05:45:26.089255 sshd[4767]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 05:45:26.099053 systemd[1]: Started session-15.scope. Oct 31 05:45:26.099383 systemd-logind[1296]: New session 15 of user core. 
Oct 31 05:45:26.106000 audit[4767]: USER_START pid=4767 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:26.109000 audit[4770]: CRED_ACQ pid=4770 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:26.236423 kubelet[2197]: E1031 05:45:26.236341 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f447487f8-8md8h" podUID="da74ef2c-d536-4fbc-9b28-ba72dfbbfc21" Oct 31 05:45:26.237793 kubelet[2197]: E1031 05:45:26.237629 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6jdvb" podUID="749c5f31-df45-44a4-9a60-d28a8f071a0b" Oct 31 05:45:26.827607 sshd[4767]: pam_unix(sshd:session): session closed for user core Oct 31 05:45:26.840853 kernel: kauditd_printk_skb: 20 callbacks suppressed Oct 31 05:45:26.843756 kernel: audit: type=1106 audit(1761889526.829:494): pid=4767 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:26.829000 audit[4767]: USER_END pid=4767 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:26.842216 systemd[1]: sshd@14-10.244.21.74:22-139.178.68.195:53090.service: Deactivated successfully. Oct 31 05:45:26.845837 systemd[1]: session-15.scope: Deactivated successfully. Oct 31 05:45:26.846094 systemd-logind[1296]: Session 15 logged out. Waiting for processes to exit. Oct 31 05:45:26.849954 systemd-logind[1296]: Removed session 15. 
Oct 31 05:45:26.829000 audit[4767]: CRED_DISP pid=4767 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:26.857577 kernel: audit: type=1104 audit(1761889526.829:495): pid=4767 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:26.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.244.21.74:22-139.178.68.195:53090 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:45:26.864616 kernel: audit: type=1131 audit(1761889526.841:496): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.244.21.74:22-139.178.68.195:53090 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:45:27.612352 systemd[1]: run-containerd-runc-k8s.io-c0733a146273318292491838c0ff78b61e5f8f0a0a049924c2c2b81c5383735e-runc.Uz99oV.mount: Deactivated successfully. Oct 31 05:45:31.977707 systemd[1]: Started sshd@15-10.244.21.74:22-139.178.68.195:53100.service. Oct 31 05:45:31.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.244.21.74:22-139.178.68.195:53100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:45:31.989390 kernel: audit: type=1130 audit(1761889531.976:497): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.244.21.74:22-139.178.68.195:53100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 05:45:32.930000 audit[4807]: USER_ACCT pid=4807 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:32.935809 sshd[4807]: Accepted publickey for core from 139.178.68.195 port 53100 ssh2: RSA SHA256:SNDspdr08ljC7u6YFsSEbFAM11P2/Di3eXjgL9Yd2IU Oct 31 05:45:32.941000 audit[4807]: CRED_ACQ pid=4807 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:32.943572 sshd[4807]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 05:45:32.948781 kernel: audit: type=1101 audit(1761889532.930:498): pid=4807 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:32.948904 kernel: audit: type=1103 audit(1761889532.941:499): pid=4807 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:32.948975 kernel: audit: type=1006 audit(1761889532.941:500): pid=4807 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Oct 31 05:45:32.941000 audit[4807]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdbc021a60 a2=3 a3=0 items=0 ppid=1 pid=4807 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:45:32.960558 systemd[1]: Started session-16.scope. Oct 31 05:45:32.961967 kernel: audit: type=1300 audit(1761889532.941:500): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdbc021a60 a2=3 a3=0 items=0 ppid=1 pid=4807 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:45:32.961760 systemd-logind[1296]: New session 16 of user core. Oct 31 05:45:32.941000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 05:45:32.971248 kernel: audit: type=1327 audit(1761889532.941:500): proctitle=737368643A20636F7265205B707269765D Oct 31 05:45:32.972000 audit[4807]: USER_START pid=4807 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:32.983387 kernel: audit: type=1105 audit(1761889532.972:501): pid=4807 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:32.983516 kernel: audit: type=1103 audit(1761889532.981:502): pid=4810 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:32.981000 audit[4810]: CRED_ACQ pid=4810 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh 
res=success' Oct 31 05:45:33.666958 sshd[4807]: pam_unix(sshd:session): session closed for user core Oct 31 05:45:33.667000 audit[4807]: USER_END pid=4807 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:33.681046 kernel: audit: type=1106 audit(1761889533.667:503): pid=4807 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:33.682634 systemd[1]: sshd@15-10.244.21.74:22-139.178.68.195:53100.service: Deactivated successfully. Oct 31 05:45:33.694557 kernel: audit: type=1104 audit(1761889533.673:504): pid=4807 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:33.673000 audit[4807]: CRED_DISP pid=4807 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:33.686045 systemd-logind[1296]: Session 16 logged out. Waiting for processes to exit. Oct 31 05:45:33.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.244.21.74:22-139.178.68.195:53100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:45:33.688576 systemd[1]: session-16.scope: Deactivated successfully. 
Oct 31 05:45:33.689869 systemd-logind[1296]: Removed session 16. Oct 31 05:45:34.226742 kubelet[2197]: E1031 05:45:34.226671 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ff9f49d5d-dq5pc" podUID="ac96e24b-c0dd-48fd-838b-a540fa2a89c0" Oct 31 05:45:35.234712 kubelet[2197]: E1031 05:45:35.234276 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ff9f49d5d-sjjrx" podUID="4433a427-a60f-4547-95ae-ea306784cb66" Oct 31 05:45:36.654037 env[1308]: time="2025-10-31T05:45:36.653863025Z" level=info msg="StopPodSandbox for \"ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02\"" Oct 31 05:45:36.952262 env[1308]: 2025-10-31 05:45:36.837 [WARNING][4830] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f2mor.gb1.brightbox.com-k8s-goldmane--666569f655--xdjq9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c", ResourceVersion:"1363", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 43, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f2mor.gb1.brightbox.com", ContainerID:"ea3ec7af8964d615a9efe31f808e8ba106691b91a353e0afbe5bdbc71424122c", Pod:"goldmane-666569f655-xdjq9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.24.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali225666ae0fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:45:36.952262 env[1308]: 2025-10-31 05:45:36.839 [INFO][4830] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" Oct 31 05:45:36.952262 env[1308]: 2025-10-31 05:45:36.839 [INFO][4830] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" iface="eth0" netns="" Oct 31 05:45:36.952262 env[1308]: 2025-10-31 05:45:36.839 [INFO][4830] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" Oct 31 05:45:36.952262 env[1308]: 2025-10-31 05:45:36.839 [INFO][4830] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" Oct 31 05:45:36.952262 env[1308]: 2025-10-31 05:45:36.924 [INFO][4837] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" HandleID="k8s-pod-network.ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" Workload="srv--f2mor.gb1.brightbox.com-k8s-goldmane--666569f655--xdjq9-eth0" Oct 31 05:45:36.952262 env[1308]: 2025-10-31 05:45:36.925 [INFO][4837] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 05:45:36.952262 env[1308]: 2025-10-31 05:45:36.925 [INFO][4837] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 05:45:36.952262 env[1308]: 2025-10-31 05:45:36.944 [WARNING][4837] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" HandleID="k8s-pod-network.ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" Workload="srv--f2mor.gb1.brightbox.com-k8s-goldmane--666569f655--xdjq9-eth0" Oct 31 05:45:36.952262 env[1308]: 2025-10-31 05:45:36.944 [INFO][4837] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" HandleID="k8s-pod-network.ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" Workload="srv--f2mor.gb1.brightbox.com-k8s-goldmane--666569f655--xdjq9-eth0" Oct 31 05:45:36.952262 env[1308]: 2025-10-31 05:45:36.946 [INFO][4837] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 05:45:36.952262 env[1308]: 2025-10-31 05:45:36.949 [INFO][4830] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" Oct 31 05:45:36.953746 env[1308]: time="2025-10-31T05:45:36.952216477Z" level=info msg="TearDown network for sandbox \"ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02\" successfully" Oct 31 05:45:36.953746 env[1308]: time="2025-10-31T05:45:36.952459957Z" level=info msg="StopPodSandbox for \"ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02\" returns successfully" Oct 31 05:45:36.953746 env[1308]: time="2025-10-31T05:45:36.953309322Z" level=info msg="RemovePodSandbox for \"ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02\"" Oct 31 05:45:36.953746 env[1308]: time="2025-10-31T05:45:36.953367813Z" level=info msg="Forcibly stopping sandbox \"ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02\"" Oct 31 05:45:37.065826 env[1308]: 2025-10-31 05:45:37.009 [WARNING][4852] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f2mor.gb1.brightbox.com-k8s-goldmane--666569f655--xdjq9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c", ResourceVersion:"1363", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 43, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f2mor.gb1.brightbox.com", ContainerID:"ea3ec7af8964d615a9efe31f808e8ba106691b91a353e0afbe5bdbc71424122c", Pod:"goldmane-666569f655-xdjq9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.24.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali225666ae0fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:45:37.065826 env[1308]: 2025-10-31 05:45:37.009 [INFO][4852] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" Oct 31 05:45:37.065826 env[1308]: 2025-10-31 05:45:37.009 [INFO][4852] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" iface="eth0" netns="" Oct 31 05:45:37.065826 env[1308]: 2025-10-31 05:45:37.009 [INFO][4852] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" Oct 31 05:45:37.065826 env[1308]: 2025-10-31 05:45:37.009 [INFO][4852] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" Oct 31 05:45:37.065826 env[1308]: 2025-10-31 05:45:37.041 [INFO][4859] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" HandleID="k8s-pod-network.ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" Workload="srv--f2mor.gb1.brightbox.com-k8s-goldmane--666569f655--xdjq9-eth0" Oct 31 05:45:37.065826 env[1308]: 2025-10-31 05:45:37.042 [INFO][4859] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 05:45:37.065826 env[1308]: 2025-10-31 05:45:37.042 [INFO][4859] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 05:45:37.065826 env[1308]: 2025-10-31 05:45:37.058 [WARNING][4859] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" HandleID="k8s-pod-network.ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" Workload="srv--f2mor.gb1.brightbox.com-k8s-goldmane--666569f655--xdjq9-eth0" Oct 31 05:45:37.065826 env[1308]: 2025-10-31 05:45:37.058 [INFO][4859] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" HandleID="k8s-pod-network.ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" Workload="srv--f2mor.gb1.brightbox.com-k8s-goldmane--666569f655--xdjq9-eth0" Oct 31 05:45:37.065826 env[1308]: 2025-10-31 05:45:37.060 [INFO][4859] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 05:45:37.065826 env[1308]: 2025-10-31 05:45:37.063 [INFO][4852] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02" Oct 31 05:45:37.067692 env[1308]: time="2025-10-31T05:45:37.065868030Z" level=info msg="TearDown network for sandbox \"ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02\" successfully" Oct 31 05:45:37.071721 env[1308]: time="2025-10-31T05:45:37.071666837Z" level=info msg="RemovePodSandbox \"ac17d9db8b2b7577e392282af54f9c0bca0c914e9190af04309f09e203986a02\" returns successfully" Oct 31 05:45:37.072637 env[1308]: time="2025-10-31T05:45:37.072599499Z" level=info msg="StopPodSandbox for \"5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1\"" Oct 31 05:45:37.187960 env[1308]: 2025-10-31 05:45:37.132 [WARNING][4873] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f2mor.gb1.brightbox.com-k8s-calico--kube--controllers--58d798db8c--5nl8j-eth0", GenerateName:"calico-kube-controllers-58d798db8c-", Namespace:"calico-system", SelfLink:"", UID:"6f047019-b3ae-41f9-bdae-4d0664c67b92", ResourceVersion:"1356", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 43, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58d798db8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f2mor.gb1.brightbox.com", ContainerID:"da3cbca0751ddfc19378d44f0cb856461343951506ec04d1a5054aa23de7b0e2", Pod:"calico-kube-controllers-58d798db8c-5nl8j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.24.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7611c19a5e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:45:37.187960 env[1308]: 2025-10-31 05:45:37.133 [INFO][4873] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" Oct 31 05:45:37.187960 env[1308]: 2025-10-31 05:45:37.133 [INFO][4873] cni-plugin/dataplane_linux.go 
555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" iface="eth0" netns="" Oct 31 05:45:37.187960 env[1308]: 2025-10-31 05:45:37.133 [INFO][4873] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" Oct 31 05:45:37.187960 env[1308]: 2025-10-31 05:45:37.135 [INFO][4873] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" Oct 31 05:45:37.187960 env[1308]: 2025-10-31 05:45:37.169 [INFO][4880] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" HandleID="k8s-pod-network.5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--kube--controllers--58d798db8c--5nl8j-eth0" Oct 31 05:45:37.187960 env[1308]: 2025-10-31 05:45:37.170 [INFO][4880] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 05:45:37.187960 env[1308]: 2025-10-31 05:45:37.170 [INFO][4880] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 05:45:37.187960 env[1308]: 2025-10-31 05:45:37.180 [WARNING][4880] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" HandleID="k8s-pod-network.5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--kube--controllers--58d798db8c--5nl8j-eth0" Oct 31 05:45:37.187960 env[1308]: 2025-10-31 05:45:37.180 [INFO][4880] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" HandleID="k8s-pod-network.5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--kube--controllers--58d798db8c--5nl8j-eth0" Oct 31 05:45:37.187960 env[1308]: 2025-10-31 05:45:37.183 [INFO][4880] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 05:45:37.187960 env[1308]: 2025-10-31 05:45:37.185 [INFO][4873] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" Oct 31 05:45:37.189623 env[1308]: time="2025-10-31T05:45:37.188601403Z" level=info msg="TearDown network for sandbox \"5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1\" successfully" Oct 31 05:45:37.189623 env[1308]: time="2025-10-31T05:45:37.188649110Z" level=info msg="StopPodSandbox for \"5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1\" returns successfully" Oct 31 05:45:37.189623 env[1308]: time="2025-10-31T05:45:37.189294596Z" level=info msg="RemovePodSandbox for \"5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1\"" Oct 31 05:45:37.189623 env[1308]: time="2025-10-31T05:45:37.189338673Z" level=info msg="Forcibly stopping sandbox \"5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1\"" Oct 31 05:45:37.248640 kubelet[2197]: E1031 05:45:37.242397 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-58d798db8c-5nl8j" podUID="6f047019-b3ae-41f9-bdae-4d0664c67b92" Oct 31 05:45:37.334692 env[1308]: 2025-10-31 05:45:37.268 [WARNING][4897] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f2mor.gb1.brightbox.com-k8s-calico--kube--controllers--58d798db8c--5nl8j-eth0", GenerateName:"calico-kube-controllers-58d798db8c-", Namespace:"calico-system", SelfLink:"", UID:"6f047019-b3ae-41f9-bdae-4d0664c67b92", ResourceVersion:"1453", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 43, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58d798db8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f2mor.gb1.brightbox.com", ContainerID:"da3cbca0751ddfc19378d44f0cb856461343951506ec04d1a5054aa23de7b0e2", Pod:"calico-kube-controllers-58d798db8c-5nl8j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.24.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7611c19a5e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:45:37.334692 env[1308]: 2025-10-31 05:45:37.269 [INFO][4897] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" Oct 31 05:45:37.334692 env[1308]: 2025-10-31 05:45:37.269 [INFO][4897] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" iface="eth0" netns="" Oct 31 05:45:37.334692 env[1308]: 2025-10-31 05:45:37.269 [INFO][4897] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" Oct 31 05:45:37.334692 env[1308]: 2025-10-31 05:45:37.269 [INFO][4897] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" Oct 31 05:45:37.334692 env[1308]: 2025-10-31 05:45:37.315 [INFO][4904] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" HandleID="k8s-pod-network.5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--kube--controllers--58d798db8c--5nl8j-eth0" Oct 31 05:45:37.334692 env[1308]: 2025-10-31 05:45:37.315 [INFO][4904] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 05:45:37.334692 env[1308]: 2025-10-31 05:45:37.315 [INFO][4904] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 05:45:37.334692 env[1308]: 2025-10-31 05:45:37.327 [WARNING][4904] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" HandleID="k8s-pod-network.5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--kube--controllers--58d798db8c--5nl8j-eth0" Oct 31 05:45:37.334692 env[1308]: 2025-10-31 05:45:37.327 [INFO][4904] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" HandleID="k8s-pod-network.5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--kube--controllers--58d798db8c--5nl8j-eth0" Oct 31 05:45:37.334692 env[1308]: 2025-10-31 05:45:37.330 [INFO][4904] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 05:45:37.334692 env[1308]: 2025-10-31 05:45:37.332 [INFO][4897] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1" Oct 31 05:45:37.335712 env[1308]: time="2025-10-31T05:45:37.334733162Z" level=info msg="TearDown network for sandbox \"5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1\" successfully" Oct 31 05:45:37.339261 env[1308]: time="2025-10-31T05:45:37.339202352Z" level=info msg="RemovePodSandbox \"5c843f25733ec65a5738eeeafea359b822544057af00b36daf435531334d16f1\" returns successfully" Oct 31 05:45:37.339984 env[1308]: time="2025-10-31T05:45:37.339920991Z" level=info msg="StopPodSandbox for \"3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4\"" Oct 31 05:45:37.437716 env[1308]: 2025-10-31 05:45:37.389 [WARNING][4918] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--sjjrx-eth0", GenerateName:"calico-apiserver-7ff9f49d5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"4433a427-a60f-4547-95ae-ea306784cb66", ResourceVersion:"1448", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 43, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ff9f49d5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f2mor.gb1.brightbox.com", ContainerID:"0103e5d65872250c76158990793100cdddd0f3d160c475fa764f95212cd24109", Pod:"calico-apiserver-7ff9f49d5d-sjjrx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6b4edd9f57a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:45:37.437716 env[1308]: 2025-10-31 05:45:37.389 [INFO][4918] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" Oct 31 05:45:37.437716 env[1308]: 2025-10-31 05:45:37.389 [INFO][4918] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" iface="eth0" netns="" Oct 31 05:45:37.437716 env[1308]: 2025-10-31 05:45:37.389 [INFO][4918] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" Oct 31 05:45:37.437716 env[1308]: 2025-10-31 05:45:37.389 [INFO][4918] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" Oct 31 05:45:37.437716 env[1308]: 2025-10-31 05:45:37.418 [INFO][4925] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" HandleID="k8s-pod-network.3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--sjjrx-eth0" Oct 31 05:45:37.437716 env[1308]: 2025-10-31 05:45:37.418 [INFO][4925] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 05:45:37.437716 env[1308]: 2025-10-31 05:45:37.418 [INFO][4925] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 05:45:37.437716 env[1308]: 2025-10-31 05:45:37.429 [WARNING][4925] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" HandleID="k8s-pod-network.3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--sjjrx-eth0" Oct 31 05:45:37.437716 env[1308]: 2025-10-31 05:45:37.429 [INFO][4925] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" HandleID="k8s-pod-network.3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--sjjrx-eth0" Oct 31 05:45:37.437716 env[1308]: 2025-10-31 05:45:37.431 [INFO][4925] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 05:45:37.437716 env[1308]: 2025-10-31 05:45:37.433 [INFO][4918] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" Oct 31 05:45:37.439517 env[1308]: time="2025-10-31T05:45:37.437751279Z" level=info msg="TearDown network for sandbox \"3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4\" successfully" Oct 31 05:45:37.439517 env[1308]: time="2025-10-31T05:45:37.437793940Z" level=info msg="StopPodSandbox for \"3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4\" returns successfully" Oct 31 05:45:37.439517 env[1308]: time="2025-10-31T05:45:37.438362936Z" level=info msg="RemovePodSandbox for \"3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4\"" Oct 31 05:45:37.439517 env[1308]: time="2025-10-31T05:45:37.438448277Z" level=info msg="Forcibly stopping sandbox \"3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4\"" Oct 31 05:45:37.544643 env[1308]: 2025-10-31 05:45:37.492 [WARNING][4940] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--sjjrx-eth0", GenerateName:"calico-apiserver-7ff9f49d5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"4433a427-a60f-4547-95ae-ea306784cb66", ResourceVersion:"1448", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 5, 43, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ff9f49d5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-f2mor.gb1.brightbox.com", ContainerID:"0103e5d65872250c76158990793100cdddd0f3d160c475fa764f95212cd24109", Pod:"calico-apiserver-7ff9f49d5d-sjjrx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.24.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6b4edd9f57a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 05:45:37.544643 env[1308]: 2025-10-31 05:45:37.493 [INFO][4940] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" Oct 31 05:45:37.544643 env[1308]: 2025-10-31 05:45:37.493 [INFO][4940] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" iface="eth0" netns="" Oct 31 05:45:37.544643 env[1308]: 2025-10-31 05:45:37.493 [INFO][4940] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" Oct 31 05:45:37.544643 env[1308]: 2025-10-31 05:45:37.493 [INFO][4940] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" Oct 31 05:45:37.544643 env[1308]: 2025-10-31 05:45:37.528 [INFO][4947] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" HandleID="k8s-pod-network.3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--sjjrx-eth0" Oct 31 05:45:37.544643 env[1308]: 2025-10-31 05:45:37.528 [INFO][4947] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 05:45:37.544643 env[1308]: 2025-10-31 05:45:37.528 [INFO][4947] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 05:45:37.544643 env[1308]: 2025-10-31 05:45:37.538 [WARNING][4947] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" HandleID="k8s-pod-network.3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--sjjrx-eth0" Oct 31 05:45:37.544643 env[1308]: 2025-10-31 05:45:37.538 [INFO][4947] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" HandleID="k8s-pod-network.3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" Workload="srv--f2mor.gb1.brightbox.com-k8s-calico--apiserver--7ff9f49d5d--sjjrx-eth0" Oct 31 05:45:37.544643 env[1308]: 2025-10-31 05:45:37.540 [INFO][4947] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 05:45:37.544643 env[1308]: 2025-10-31 05:45:37.542 [INFO][4940] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4" Oct 31 05:45:37.547120 env[1308]: time="2025-10-31T05:45:37.546877825Z" level=info msg="TearDown network for sandbox \"3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4\" successfully" Oct 31 05:45:37.551016 env[1308]: time="2025-10-31T05:45:37.550950343Z" level=info msg="RemovePodSandbox \"3f82d4ef725443724aac8b1aaaf5a728b03743c370ca66ef40669bfa53f922d4\" returns successfully" Oct 31 05:45:38.225741 kubelet[2197]: E1031 05:45:38.225676 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6jdvb" podUID="749c5f31-df45-44a4-9a60-d28a8f071a0b" Oct 31 05:45:38.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.244.21.74:22-139.178.68.195:50998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:45:38.821840 kernel: kauditd_printk_skb: 1 callbacks suppressed Oct 31 05:45:38.821953 kernel: audit: type=1130 audit(1761889538.813:506): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.244.21.74:22-139.178.68.195:50998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:45:38.814458 systemd[1]: Started sshd@16-10.244.21.74:22-139.178.68.195:50998.service. 
Oct 31 05:45:39.225589 kubelet[2197]: E1031 05:45:39.225409 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xdjq9" podUID="2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c" Oct 31 05:45:39.771489 sshd[4953]: Accepted publickey for core from 139.178.68.195 port 50998 ssh2: RSA SHA256:SNDspdr08ljC7u6YFsSEbFAM11P2/Di3eXjgL9Yd2IU Oct 31 05:45:39.782588 kernel: audit: type=1101 audit(1761889539.769:507): pid=4953 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:39.769000 audit[4953]: USER_ACCT pid=4953 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:39.782000 audit[4953]: CRED_ACQ pid=4953 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:39.791476 sshd[4953]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 05:45:39.795291 kernel: audit: type=1103 audit(1761889539.782:508): pid=4953 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:39.795404 kernel: audit: type=1006 audit(1761889539.782:509): pid=4953 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Oct 31 05:45:39.782000 audit[4953]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff58197160 a2=3 a3=0 items=0 ppid=1 pid=4953 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:45:39.803576 kernel: audit: type=1300 audit(1761889539.782:509): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff58197160 a2=3 a3=0 items=0 ppid=1 pid=4953 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:45:39.782000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 05:45:39.807618 kernel: audit: type=1327 audit(1761889539.782:509): proctitle=737368643A20636F7265205B707269765D Oct 31 05:45:39.814627 systemd-logind[1296]: New session 17 of user core. Oct 31 05:45:39.816075 systemd[1]: Started session-17.scope. 
Oct 31 05:45:39.823000 audit[4953]: USER_START pid=4953 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:39.834837 kernel: audit: type=1105 audit(1761889539.823:510): pid=4953 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:39.834000 audit[4956]: CRED_ACQ pid=4956 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:39.843587 kernel: audit: type=1103 audit(1761889539.834:511): pid=4956 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:40.227324 kubelet[2197]: E1031 05:45:40.227136 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f447487f8-8md8h" podUID="da74ef2c-d536-4fbc-9b28-ba72dfbbfc21" Oct 31 05:45:40.565630 sshd[4953]: pam_unix(sshd:session): session closed for user core Oct 31 05:45:40.568000 audit[4953]: USER_END pid=4953 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:40.577559 kernel: audit: type=1106 audit(1761889540.568:512): pid=4953 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:40.578082 systemd[1]: sshd@16-10.244.21.74:22-139.178.68.195:50998.service: Deactivated successfully. Oct 31 05:45:40.580159 systemd[1]: session-17.scope: Deactivated successfully. Oct 31 05:45:40.581826 systemd-logind[1296]: Session 17 logged out. Waiting for processes to exit. Oct 31 05:45:40.583250 systemd-logind[1296]: Removed session 17. 
Oct 31 05:45:40.568000 audit[4953]: CRED_DISP pid=4953 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:40.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.244.21.74:22-139.178.68.195:50998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:45:40.590573 kernel: audit: type=1104 audit(1761889540.568:513): pid=4953 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:45.225808 kubelet[2197]: E1031 05:45:45.225743 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ff9f49d5d-dq5pc" podUID="ac96e24b-c0dd-48fd-838b-a540fa2a89c0" Oct 31 05:45:45.712395 systemd[1]: Started sshd@17-10.244.21.74:22-139.178.68.195:55742.service. Oct 31 05:45:45.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.244.21.74:22-139.178.68.195:55742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 05:45:45.731086 kernel: kauditd_printk_skb: 1 callbacks suppressed Oct 31 05:45:45.731230 kernel: audit: type=1130 audit(1761889545.711:515): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.244.21.74:22-139.178.68.195:55742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:45:46.224776 kubelet[2197]: E1031 05:45:46.224717 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ff9f49d5d-sjjrx" podUID="4433a427-a60f-4547-95ae-ea306784cb66" Oct 31 05:45:46.611000 audit[4969]: USER_ACCT pid=4969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:46.615673 sshd[4969]: Accepted publickey for core from 139.178.68.195 port 55742 ssh2: RSA SHA256:SNDspdr08ljC7u6YFsSEbFAM11P2/Di3eXjgL9Yd2IU Oct 31 05:45:46.622570 kernel: audit: type=1101 audit(1761889546.611:516): pid=4969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:46.622000 audit[4969]: CRED_ACQ pid=4969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:46.631836 kernel: audit: type=1103 audit(1761889546.622:517): pid=4969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:46.631645 sshd[4969]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 05:45:46.622000 audit[4969]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc61718c60 a2=3 a3=0 items=0 ppid=1 pid=4969 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:45:46.637633 kernel: audit: type=1006 audit(1761889546.622:518): pid=4969 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Oct 31 05:45:46.637712 kernel: audit: type=1300 audit(1761889546.622:518): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc61718c60 a2=3 a3=0 items=0 ppid=1 pid=4969 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:45:46.644827 kernel: audit: type=1327 audit(1761889546.622:518): proctitle=737368643A20636F7265205B707269765D Oct 31 05:45:46.622000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 05:45:46.650043 systemd-logind[1296]: New session 18 of user core. Oct 31 05:45:46.652835 systemd[1]: Started session-18.scope. 
Oct 31 05:45:46.660000 audit[4969]: USER_START pid=4969 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:46.671567 kernel: audit: type=1105 audit(1761889546.660:519): pid=4969 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:46.671000 audit[4972]: CRED_ACQ pid=4972 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:46.679613 kernel: audit: type=1103 audit(1761889546.671:520): pid=4972 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:47.359917 sshd[4969]: pam_unix(sshd:session): session closed for user core Oct 31 05:45:47.369719 kernel: audit: type=1106 audit(1761889547.359:521): pid=4969 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:47.359000 audit[4969]: USER_END pid=4969 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail 
acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:47.360000 audit[4969]: CRED_DISP pid=4969 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:47.364081 systemd-logind[1296]: Session 18 logged out. Waiting for processes to exit. Oct 31 05:45:47.365450 systemd[1]: sshd@17-10.244.21.74:22-139.178.68.195:55742.service: Deactivated successfully. Oct 31 05:45:47.366702 systemd[1]: session-18.scope: Deactivated successfully. Oct 31 05:45:47.368949 systemd-logind[1296]: Removed session 18. Oct 31 05:45:47.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.244.21.74:22-139.178.68.195:55742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:45:47.376568 kernel: audit: type=1104 audit(1761889547.360:522): pid=4969 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:49.227185 kubelet[2197]: E1031 05:45:49.227112 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6jdvb" podUID="749c5f31-df45-44a4-9a60-d28a8f071a0b" Oct 31 05:45:50.225865 kubelet[2197]: E1031 05:45:50.225790 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xdjq9" podUID="2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c" Oct 31 05:45:50.226230 kubelet[2197]: E1031 05:45:50.226147 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-58d798db8c-5nl8j" podUID="6f047019-b3ae-41f9-bdae-4d0664c67b92" Oct 31 05:45:52.508605 systemd[1]: Started sshd@18-10.244.21.74:22-139.178.68.195:55750.service. 
Oct 31 05:45:52.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.244.21.74:22-139.178.68.195:55750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:45:52.513880 kernel: kauditd_printk_skb: 1 callbacks suppressed Oct 31 05:45:52.514031 kernel: audit: type=1130 audit(1761889552.507:524): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.244.21.74:22-139.178.68.195:55750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:45:53.406000 audit[4987]: USER_ACCT pid=4987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:53.408021 sshd[4987]: Accepted publickey for core from 139.178.68.195 port 55750 ssh2: RSA SHA256:SNDspdr08ljC7u6YFsSEbFAM11P2/Di3eXjgL9Yd2IU Oct 31 05:45:53.414563 kernel: audit: type=1101 audit(1761889553.406:525): pid=4987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:53.414000 audit[4987]: CRED_ACQ pid=4987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:53.416974 sshd[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 05:45:53.426636 kernel: audit: type=1103 audit(1761889553.414:526): pid=4987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:53.427325 kernel: audit: type=1006 audit(1761889553.414:527): pid=4987 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Oct 31 05:45:53.414000 audit[4987]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe9d76e610 a2=3 a3=0 items=0 ppid=1 pid=4987 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:45:53.441564 kernel: audit: type=1300 audit(1761889553.414:527): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe9d76e610 a2=3 a3=0 items=0 ppid=1 pid=4987 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:45:53.441681 kernel: audit: type=1327 audit(1761889553.414:527): proctitle=737368643A20636F7265205B707269765D Oct 31 05:45:53.414000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 05:45:53.444446 systemd-logind[1296]: New session 19 of user core. Oct 31 05:45:53.446003 systemd[1]: Started session-19.scope. 
Oct 31 05:45:53.454000 audit[4987]: USER_START pid=4987 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:53.458000 audit[4990]: CRED_ACQ pid=4990 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:53.470148 kernel: audit: type=1105 audit(1761889553.454:528): pid=4987 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:53.470262 kernel: audit: type=1103 audit(1761889553.458:529): pid=4990 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:54.180597 sshd[4987]: pam_unix(sshd:session): session closed for user core Oct 31 05:45:54.181000 audit[4987]: USER_END pid=4987 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:54.185035 systemd[1]: sshd@18-10.244.21.74:22-139.178.68.195:55750.service: Deactivated successfully. Oct 31 05:45:54.186405 systemd[1]: session-19.scope: Deactivated successfully. 
Oct 31 05:45:54.190561 kernel: audit: type=1106 audit(1761889554.181:530): pid=4987 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:54.191432 systemd-logind[1296]: Session 19 logged out. Waiting for processes to exit. Oct 31 05:45:54.192803 systemd-logind[1296]: Removed session 19. Oct 31 05:45:54.181000 audit[4987]: CRED_DISP pid=4987 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:54.206582 kernel: audit: type=1104 audit(1761889554.181:531): pid=4987 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:54.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.244.21.74:22-139.178.68.195:55750 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 05:45:54.227042 kubelet[2197]: E1031 05:45:54.226980 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f447487f8-8md8h" podUID="da74ef2c-d536-4fbc-9b28-ba72dfbbfc21" Oct 31 05:45:54.326779 systemd[1]: Started sshd@19-10.244.21.74:22-139.178.68.195:52996.service. Oct 31 05:45:54.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.244.21.74:22-139.178.68.195:52996 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 05:45:55.219000 audit[5000]: USER_ACCT pid=5000 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:55.221622 sshd[5000]: Accepted publickey for core from 139.178.68.195 port 52996 ssh2: RSA SHA256:SNDspdr08ljC7u6YFsSEbFAM11P2/Di3eXjgL9Yd2IU Oct 31 05:45:55.221000 audit[5000]: CRED_ACQ pid=5000 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:55.221000 audit[5000]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe9b4d7cc0 a2=3 a3=0 items=0 ppid=1 pid=5000 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:45:55.221000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 05:45:55.226315 sshd[5000]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 05:45:55.233610 systemd-logind[1296]: New session 20 of user core. Oct 31 05:45:55.235172 systemd[1]: Started session-20.scope. 
Oct 31 05:45:55.246000 audit[5000]: USER_START pid=5000 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:55.249000 audit[5003]: CRED_ACQ pid=5003 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:56.240379 env[1308]: time="2025-10-31T05:45:56.239841635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 05:45:56.380940 sshd[5000]: pam_unix(sshd:session): session closed for user core Oct 31 05:45:56.381000 audit[5000]: USER_END pid=5000 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:56.381000 audit[5000]: CRED_DISP pid=5000 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:56.384687 systemd[1]: sshd@19-10.244.21.74:22-139.178.68.195:52996.service: Deactivated successfully. Oct 31 05:45:56.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.244.21.74:22-139.178.68.195:52996 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:45:56.386299 systemd[1]: session-20.scope: Deactivated successfully. Oct 31 05:45:56.386344 systemd-logind[1296]: Session 20 logged out. 
Waiting for processes to exit. Oct 31 05:45:56.389440 systemd-logind[1296]: Removed session 20. Oct 31 05:45:56.529044 systemd[1]: Started sshd@20-10.244.21.74:22-139.178.68.195:53006.service. Oct 31 05:45:56.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.244.21.74:22-139.178.68.195:53006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:45:56.549948 env[1308]: time="2025-10-31T05:45:56.549842974Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 05:45:56.551354 env[1308]: time="2025-10-31T05:45:56.551275350Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 05:45:56.557449 kubelet[2197]: E1031 05:45:56.557337 2197 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 05:45:56.559099 kubelet[2197]: E1031 05:45:56.559025 2197 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 05:45:56.561762 kubelet[2197]: E1031 05:45:56.561657 2197 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v6crh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7ff9f49d5d-dq5pc_calico-apiserver(ac96e24b-c0dd-48fd-838b-a540fa2a89c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 05:45:56.564563 kubelet[2197]: E1031 05:45:56.564499 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ff9f49d5d-dq5pc" podUID="ac96e24b-c0dd-48fd-838b-a540fa2a89c0" Oct 31 05:45:57.443000 audit[5011]: USER_ACCT pid=5011 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit 
acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:57.444959 sshd[5011]: Accepted publickey for core from 139.178.68.195 port 53006 ssh2: RSA SHA256:SNDspdr08ljC7u6YFsSEbFAM11P2/Di3eXjgL9Yd2IU Oct 31 05:45:57.444000 audit[5011]: CRED_ACQ pid=5011 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:57.444000 audit[5011]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcdb3c3520 a2=3 a3=0 items=0 ppid=1 pid=5011 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:45:57.444000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 05:45:57.447273 sshd[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 05:45:57.454485 systemd-logind[1296]: New session 21 of user core. Oct 31 05:45:57.455387 systemd[1]: Started session-21.scope. Oct 31 05:45:57.462000 audit[5011]: USER_START pid=5011 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:57.465000 audit[5014]: CRED_ACQ pid=5014 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:57.609416 systemd[1]: run-containerd-runc-k8s.io-c0733a146273318292491838c0ff78b61e5f8f0a0a049924c2c2b81c5383735e-runc.IQbPKH.mount: Deactivated successfully. 
Oct 31 05:45:59.112778 kernel: kauditd_printk_skb: 20 callbacks suppressed Oct 31 05:45:59.113296 kernel: audit: type=1325 audit(1761889559.105:548): table=filter:126 family=2 entries=26 op=nft_register_rule pid=5046 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:45:59.105000 audit[5046]: NETFILTER_CFG table=filter:126 family=2 entries=26 op=nft_register_rule pid=5046 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:45:59.105000 audit[5046]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffea46e3940 a2=0 a3=7ffea46e392c items=0 ppid=2302 pid=5046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:45:59.105000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:45:59.128911 kernel: audit: type=1300 audit(1761889559.105:548): arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffea46e3940 a2=0 a3=7ffea46e392c items=0 ppid=2302 pid=5046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:45:59.129018 kernel: audit: type=1327 audit(1761889559.105:548): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:45:59.153000 audit[5046]: NETFILTER_CFG table=nat:127 family=2 entries=20 op=nft_register_rule pid=5046 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:45:59.159563 kernel: audit: type=1325 audit(1761889559.153:549): table=nat:127 family=2 entries=20 op=nft_register_rule pid=5046 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 31 05:45:59.153000 audit[5046]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=5772 a0=3 a1=7ffea46e3940 a2=0 a3=0 items=0 ppid=2302 pid=5046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:45:59.168567 kernel: audit: type=1300 audit(1761889559.153:549): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffea46e3940 a2=0 a3=0 items=0 ppid=2302 pid=5046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:45:59.153000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:45:59.173572 kernel: audit: type=1327 audit(1761889559.153:549): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 31 05:45:59.184373 sshd[5011]: pam_unix(sshd:session): session closed for user core Oct 31 05:45:59.195000 audit[5011]: USER_END pid=5011 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:59.201142 systemd[1]: sshd@20-10.244.21.74:22-139.178.68.195:53006.service: Deactivated successfully. Oct 31 05:45:59.203683 systemd[1]: session-21.scope: Deactivated successfully. 
Oct 31 05:45:59.204558 kernel: audit: type=1106 audit(1761889559.195:550): pid=5011 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:59.196000 audit[5011]: CRED_DISP pid=5011 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:59.212416 systemd-logind[1296]: Session 21 logged out. Waiting for processes to exit. Oct 31 05:45:59.212962 kernel: audit: type=1104 audit(1761889559.196:551): pid=5011 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:45:59.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.244.21.74:22-139.178.68.195:53006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:45:59.220580 kernel: audit: type=1131 audit(1761889559.200:552): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.244.21.74:22-139.178.68.195:53006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:45:59.220359 systemd-logind[1296]: Removed session 21. 
Oct 31 05:45:59.206000 audit[5048]: NETFILTER_CFG table=filter:128 family=2 entries=38 op=nft_register_rule pid=5048 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Oct 31 05:45:59.225556 kernel: audit: type=1325 audit(1761889559.206:553): table=filter:128 family=2 entries=38 op=nft_register_rule pid=5048 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Oct 31 05:45:59.206000 audit[5048]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc20c883d0 a2=0 a3=7ffc20c883bc items=0 ppid=2302 pid=5048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:45:59.206000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Oct 31 05:45:59.221000 audit[5048]: NETFILTER_CFG table=nat:129 family=2 entries=20 op=nft_register_rule pid=5048 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Oct 31 05:45:59.221000 audit[5048]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc20c883d0 a2=0 a3=0 items=0 ppid=2302 pid=5048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:45:59.221000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Oct 31 05:45:59.241838 kubelet[2197]: E1031 05:45:59.241781 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ff9f49d5d-sjjrx" podUID="4433a427-a60f-4547-95ae-ea306784cb66"
Oct 31 05:45:59.325417 systemd[1]: Started sshd@21-10.244.21.74:22-139.178.68.195:53016.service.
Oct 31 05:45:59.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.244.21.74:22-139.178.68.195:53016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:46:00.234567 sshd[5051]: Accepted publickey for core from 139.178.68.195 port 53016 ssh2: RSA SHA256:SNDspdr08ljC7u6YFsSEbFAM11P2/Di3eXjgL9Yd2IU
Oct 31 05:46:00.232000 audit[5051]: USER_ACCT pid=5051 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 31 05:46:00.234000 audit[5051]: CRED_ACQ pid=5051 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 31 05:46:00.234000 audit[5051]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffd8e4a2f0 a2=3 a3=0 items=0 ppid=1 pid=5051 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:46:00.234000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Oct 31 05:46:00.237292 sshd[5051]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 05:46:00.244730 systemd-logind[1296]: New session 22 of user core.
Oct 31 05:46:00.245637 systemd[1]: Started session-22.scope.
Oct 31 05:46:00.253000 audit[5051]: USER_START pid=5051 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 31 05:46:00.258000 audit[5056]: CRED_ACQ pid=5056 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 31 05:46:01.235140 kubelet[2197]: E1031 05:46:01.235045 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xdjq9" podUID="2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c"
Oct 31 05:46:01.308341 sshd[5051]: pam_unix(sshd:session): session closed for user core
Oct 31 05:46:01.308000 audit[5051]: USER_END pid=5051 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 31 05:46:01.308000 audit[5051]: CRED_DISP pid=5051 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 31 05:46:01.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.244.21.74:22-139.178.68.195:53016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:46:01.317449 systemd[1]: sshd@21-10.244.21.74:22-139.178.68.195:53016.service: Deactivated successfully.
Oct 31 05:46:01.320465 systemd[1]: session-22.scope: Deactivated successfully.
Oct 31 05:46:01.320484 systemd-logind[1296]: Session 22 logged out. Waiting for processes to exit.
Oct 31 05:46:01.326136 systemd-logind[1296]: Removed session 22.
Oct 31 05:46:01.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.244.21.74:22-139.178.68.195:53020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:46:01.457889 systemd[1]: Started sshd@22-10.244.21.74:22-139.178.68.195:53020.service.
Oct 31 05:46:02.425000 audit[5064]: USER_ACCT pid=5064 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 31 05:46:02.429317 sshd[5064]: Accepted publickey for core from 139.178.68.195 port 53020 ssh2: RSA SHA256:SNDspdr08ljC7u6YFsSEbFAM11P2/Di3eXjgL9Yd2IU
Oct 31 05:46:02.427000 audit[5064]: CRED_ACQ pid=5064 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 31 05:46:02.427000 audit[5064]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffed17f4ff0 a2=3 a3=0 items=0 ppid=1 pid=5064 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:46:02.427000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Oct 31 05:46:02.430196 sshd[5064]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 05:46:02.438152 systemd-logind[1296]: New session 23 of user core.
Oct 31 05:46:02.440676 systemd[1]: Started session-23.scope.
Oct 31 05:46:02.448000 audit[5064]: USER_START pid=5064 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 31 05:46:02.452000 audit[5067]: CRED_ACQ pid=5067 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 31 05:46:03.250568 env[1308]: time="2025-10-31T05:46:03.250076415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Oct 31 05:46:03.255642 kubelet[2197]: E1031 05:46:03.255572 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-58d798db8c-5nl8j" podUID="6f047019-b3ae-41f9-bdae-4d0664c67b92"
Oct 31 05:46:03.347363 sshd[5064]: pam_unix(sshd:session): session closed for user core
Oct 31 05:46:03.348000 audit[5064]: USER_END pid=5064 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 31 05:46:03.348000 audit[5064]: CRED_DISP pid=5064 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 31 05:46:03.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.244.21.74:22-139.178.68.195:53020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:46:03.353054 systemd[1]: sshd@22-10.244.21.74:22-139.178.68.195:53020.service: Deactivated successfully.
Oct 31 05:46:03.355822 systemd[1]: session-23.scope: Deactivated successfully.
Oct 31 05:46:03.356274 systemd-logind[1296]: Session 23 logged out. Waiting for processes to exit.
Oct 31 05:46:03.358179 systemd-logind[1296]: Removed session 23.
Oct 31 05:46:03.554675 env[1308]: time="2025-10-31T05:46:03.554276880Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 05:46:03.556142 env[1308]: time="2025-10-31T05:46:03.555897700Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Oct 31 05:46:03.559584 kubelet[2197]: E1031 05:46:03.559507 2197 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 31 05:46:03.560883 kubelet[2197]: E1031 05:46:03.559604 2197 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 31 05:46:03.561333 kubelet[2197]: E1031 05:46:03.561193 2197 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q9rn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6jdvb_calico-system(749c5f31-df45-44a4-9a60-d28a8f071a0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Oct 31 05:46:03.563720 env[1308]: time="2025-10-31T05:46:03.563446621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Oct 31 05:46:03.878353 env[1308]: time="2025-10-31T05:46:03.878186565Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 05:46:03.879868 env[1308]: time="2025-10-31T05:46:03.879791296Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Oct 31 05:46:03.880343 kubelet[2197]: E1031 05:46:03.880288 2197 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 31 05:46:03.880511 kubelet[2197]: E1031 05:46:03.880476 2197 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 31 05:46:03.880898 kubelet[2197]: E1031 05:46:03.880826 2197 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q9rn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-6jdvb_calico-system(749c5f31-df45-44a4-9a60-d28a8f071a0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Oct 31 05:46:03.882327 kubelet[2197]: E1031 05:46:03.882271 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6jdvb" podUID="749c5f31-df45-44a4-9a60-d28a8f071a0b"
Oct 31 05:46:05.234718 env[1308]: time="2025-10-31T05:46:05.234325003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Oct 31 05:46:05.539014 env[1308]: time="2025-10-31T05:46:05.538937698Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 05:46:05.540812 env[1308]: time="2025-10-31T05:46:05.540753479Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Oct 31 05:46:05.541303 kubelet[2197]: E1031 05:46:05.541239 2197 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Oct 31 05:46:05.541866 kubelet[2197]: E1031 05:46:05.541321 2197 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Oct 31 05:46:05.542047 kubelet[2197]: E1031 05:46:05.541974 2197 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8668490f11434cfabca44ddf284789cf,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tjkfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f447487f8-8md8h_calico-system(da74ef2c-d536-4fbc-9b28-ba72dfbbfc21): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Oct 31 05:46:05.544442 env[1308]: time="2025-10-31T05:46:05.544400900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Oct 31 05:46:05.856394 env[1308]: time="2025-10-31T05:46:05.856157876Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 05:46:05.858246 env[1308]: time="2025-10-31T05:46:05.858182733Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Oct 31 05:46:05.858922 kubelet[2197]: E1031 05:46:05.858819 2197 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Oct 31 05:46:05.859189 kubelet[2197]: E1031 05:46:05.859136 2197 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Oct 31 05:46:05.861764 kubelet[2197]: E1031 05:46:05.861665 2197 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tjkfs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f447487f8-8md8h_calico-system(da74ef2c-d536-4fbc-9b28-ba72dfbbfc21): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Oct 31 05:46:05.863389 kubelet[2197]: E1031 05:46:05.863322 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f447487f8-8md8h" podUID="da74ef2c-d536-4fbc-9b28-ba72dfbbfc21"
Oct 31 05:46:08.500596 kernel: kauditd_printk_skb: 27 callbacks suppressed
Oct 31 05:46:08.500877 kernel: audit: type=1130 audit(1761889568.492:573): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.244.21.74:22-139.178.68.195:42254 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:46:08.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.244.21.74:22-139.178.68.195:42254 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:46:08.493914 systemd[1]: Started sshd@23-10.244.21.74:22-139.178.68.195:42254.service.
Oct 31 05:46:09.449027 sshd[5097]: Accepted publickey for core from 139.178.68.195 port 42254 ssh2: RSA SHA256:SNDspdr08ljC7u6YFsSEbFAM11P2/Di3eXjgL9Yd2IU
Oct 31 05:46:09.447000 audit[5097]: USER_ACCT pid=5097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 31 05:46:09.461568 kernel: audit: type=1101 audit(1761889569.447:574): pid=5097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 31 05:46:09.460000 audit[5097]: CRED_ACQ pid=5097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 31 05:46:09.473190 kernel: audit: type=1103 audit(1761889569.460:575): pid=5097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 31 05:46:09.473615 kernel: audit: type=1006 audit(1761889569.460:576): pid=5097 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1
Oct 31 05:46:09.473694 kernel: audit: type=1300 audit(1761889569.460:576): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe9a378040 a2=3 a3=0 items=0 ppid=1 pid=5097 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:46:09.460000 audit[5097]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe9a378040 a2=3 a3=0 items=0 ppid=1 pid=5097 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:46:09.460000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Oct 31 05:46:09.482794 kernel: audit: type=1327 audit(1761889569.460:576): proctitle=737368643A20636F7265205B707269765D
Oct 31 05:46:09.483214 sshd[5097]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 31 05:46:09.496366 systemd-logind[1296]: New session 24 of user core.
Oct 31 05:46:09.496452 systemd[1]: Started session-24.scope.
Oct 31 05:46:09.510000 audit[5097]: USER_START pid=5097 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 31 05:46:09.520679 kernel: audit: type=1105 audit(1761889569.510:577): pid=5097 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 31 05:46:09.510000 audit[5100]: CRED_ACQ pid=5100 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 31 05:46:09.531621 kernel: audit: type=1103 audit(1761889569.510:578): pid=5100 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 31 05:46:10.228450 kubelet[2197]: E1031 05:46:10.228332 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ff9f49d5d-dq5pc" podUID="ac96e24b-c0dd-48fd-838b-a540fa2a89c0"
Oct 31 05:46:10.359936 kernel: audit: type=1325 audit(1761889570.334:579): table=filter:130 family=2 entries=26 op=nft_register_rule pid=5108 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Oct 31 05:46:10.334000 audit[5108]: NETFILTER_CFG table=filter:130 family=2 entries=26 op=nft_register_rule pid=5108 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Oct 31 05:46:10.375621 kernel: audit: type=1300 audit(1761889570.334:579): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc46e3dfb0 a2=0 a3=7ffc46e3df9c items=0 ppid=2302 pid=5108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:46:10.334000 audit[5108]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc46e3dfb0 a2=0 a3=7ffc46e3df9c items=0 ppid=2302 pid=5108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:46:10.334000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Oct 31 05:46:10.401000 audit[5108]: NETFILTER_CFG table=nat:131 family=2 entries=104 op=nft_register_chain pid=5108 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Oct 31 05:46:10.401000 audit[5108]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc46e3dfb0 a2=0 a3=7ffc46e3df9c items=0 ppid=2302 pid=5108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Oct 31 05:46:10.401000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Oct 31 05:46:10.550121 sshd[5097]: pam_unix(sshd:session): session closed for user core
Oct 31 05:46:10.551000 audit[5097]: USER_END pid=5097 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 31 05:46:10.551000 audit[5097]: CRED_DISP pid=5097 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success'
Oct 31 05:46:10.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.244.21.74:22-139.178.68.195:42254 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 31 05:46:10.556461 systemd[1]: sshd@23-10.244.21.74:22-139.178.68.195:42254.service: Deactivated successfully.
Oct 31 05:46:10.558875 systemd-logind[1296]: Session 24 logged out. Waiting for processes to exit.
Oct 31 05:46:10.560003 systemd[1]: session-24.scope: Deactivated successfully.
Oct 31 05:46:10.561882 systemd-logind[1296]: Removed session 24.
Oct 31 05:46:12.227984 env[1308]: time="2025-10-31T05:46:12.227851289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Oct 31 05:46:12.544282 env[1308]: time="2025-10-31T05:46:12.543844009Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 05:46:12.545671 env[1308]: time="2025-10-31T05:46:12.545463405Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 31 05:46:12.546150 kubelet[2197]: E1031 05:46:12.546046 2197 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 31 05:46:12.546760 kubelet[2197]: E1031 05:46:12.546192 2197 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 31 05:46:12.546760 kubelet[2197]: E1031 05:46:12.546574 2197 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dr6kh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7ff9f49d5d-sjjrx_calico-apiserver(4433a427-a60f-4547-95ae-ea306784cb66): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 05:46:12.548288 kubelet[2197]: E1031 05:46:12.548246 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ff9f49d5d-sjjrx" podUID="4433a427-a60f-4547-95ae-ea306784cb66" Oct 31 05:46:13.228637 env[1308]: time="2025-10-31T05:46:13.228508063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 05:46:13.534380 env[1308]: time="2025-10-31T05:46:13.534004452Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 05:46:13.535783 env[1308]: time="2025-10-31T05:46:13.535600421Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 05:46:13.536115 kubelet[2197]: E1031 05:46:13.536045 2197 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 05:46:13.536256 kubelet[2197]: E1031 05:46:13.536143 2197 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 05:46:13.536482 kubelet[2197]: E1031 05:46:13.536409 2197 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8ck5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-xdjq9_calico-system(2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 05:46:13.538241 kubelet[2197]: E1031 05:46:13.538189 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xdjq9" podUID="2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c" Oct 31 
05:46:15.233473 kubelet[2197]: E1031 05:46:15.233345 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-6jdvb" podUID="749c5f31-df45-44a4-9a60-d28a8f071a0b" Oct 31 05:46:15.700477 systemd[1]: Started sshd@24-10.244.21.74:22-139.178.68.195:35894.service. Oct 31 05:46:15.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.244.21.74:22-139.178.68.195:35894 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:46:15.704992 kernel: kauditd_printk_skb: 7 callbacks suppressed Oct 31 05:46:15.705151 kernel: audit: type=1130 audit(1761889575.700:584): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.244.21.74:22-139.178.68.195:35894 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 05:46:16.226195 env[1308]: time="2025-10-31T05:46:16.225833997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 05:46:16.534545 env[1308]: time="2025-10-31T05:46:16.534445072Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 05:46:16.537743 env[1308]: time="2025-10-31T05:46:16.537618472Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 05:46:16.538034 kubelet[2197]: E1031 05:46:16.537963 2197 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 05:46:16.538591 kubelet[2197]: E1031 05:46:16.538050 2197 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 05:46:16.538591 kubelet[2197]: E1031 05:46:16.538238 2197 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bn8m5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-58d798db8c-5nl8j_calico-system(6f047019-b3ae-41f9-bdae-4d0664c67b92): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 05:46:16.540043 kubelet[2197]: E1031 05:46:16.539990 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-58d798db8c-5nl8j" podUID="6f047019-b3ae-41f9-bdae-4d0664c67b92" Oct 31 05:46:16.625000 audit[5113]: USER_ACCT pid=5113 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:46:16.629072 sshd[5113]: Accepted publickey for core from 139.178.68.195 port 35894 ssh2: RSA SHA256:SNDspdr08ljC7u6YFsSEbFAM11P2/Di3eXjgL9Yd2IU Oct 31 05:46:16.634559 kernel: audit: type=1101 audit(1761889576.625:585): pid=5113 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:46:16.635823 sshd[5113]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 05:46:16.647617 kernel: audit: type=1103 audit(1761889576.634:586): pid=5113 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:46:16.634000 audit[5113]: CRED_ACQ pid=5113 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:46:16.645128 systemd[1]: Started session-25.scope. Oct 31 05:46:16.645749 systemd-logind[1296]: New session 25 of user core. 
Oct 31 05:46:16.667595 kernel: audit: type=1006 audit(1761889576.634:587): pid=5113 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Oct 31 05:46:16.634000 audit[5113]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff157628c0 a2=3 a3=0 items=0 ppid=1 pid=5113 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:46:16.678561 kernel: audit: type=1300 audit(1761889576.634:587): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff157628c0 a2=3 a3=0 items=0 ppid=1 pid=5113 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:46:16.634000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 05:46:16.703220 kernel: audit: type=1327 audit(1761889576.634:587): proctitle=737368643A20636F7265205B707269765D Oct 31 05:46:16.703417 kernel: audit: type=1105 audit(1761889576.662:588): pid=5113 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:46:16.662000 audit[5113]: USER_START pid=5113 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:46:16.662000 audit[5116]: CRED_ACQ pid=5116 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 
addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:46:16.710711 kernel: audit: type=1103 audit(1761889576.662:589): pid=5116 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:46:17.515972 sshd[5113]: pam_unix(sshd:session): session closed for user core Oct 31 05:46:17.517000 audit[5113]: USER_END pid=5113 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:46:17.530627 kernel: audit: type=1106 audit(1761889577.517:590): pid=5113 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:46:17.534903 systemd[1]: sshd@24-10.244.21.74:22-139.178.68.195:35894.service: Deactivated successfully. Oct 31 05:46:17.536804 systemd[1]: session-25.scope: Deactivated successfully. Oct 31 05:46:17.537375 systemd-logind[1296]: Session 25 logged out. Waiting for processes to exit. Oct 31 05:46:17.539058 systemd-logind[1296]: Removed session 25. 
Oct 31 05:46:17.530000 audit[5113]: CRED_DISP pid=5113 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:46:17.553575 kernel: audit: type=1104 audit(1761889577.530:591): pid=5113 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:46:17.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.244.21.74:22-139.178.68.195:35894 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:46:19.225884 kubelet[2197]: E1031 05:46:19.225815 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f447487f8-8md8h" podUID="da74ef2c-d536-4fbc-9b28-ba72dfbbfc21" Oct 31 05:46:21.225649 kubelet[2197]: E1031 05:46:21.225571 2197 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ff9f49d5d-dq5pc" podUID="ac96e24b-c0dd-48fd-838b-a540fa2a89c0" Oct 31 05:46:22.665265 systemd[1]: Started sshd@25-10.244.21.74:22-139.178.68.195:35908.service. Oct 31 05:46:22.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.244.21.74:22-139.178.68.195:35908 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 31 05:46:22.680077 kernel: kauditd_printk_skb: 1 callbacks suppressed Oct 31 05:46:22.680808 kernel: audit: type=1130 audit(1761889582.666:593): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.244.21.74:22-139.178.68.195:35908 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 31 05:46:23.610123 sshd[5126]: Accepted publickey for core from 139.178.68.195 port 35908 ssh2: RSA SHA256:SNDspdr08ljC7u6YFsSEbFAM11P2/Di3eXjgL9Yd2IU Oct 31 05:46:23.609000 audit[5126]: USER_ACCT pid=5126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:46:23.622571 kernel: audit: type=1101 audit(1761889583.609:594): pid=5126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:46:23.623279 sshd[5126]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 31 05:46:23.618000 audit[5126]: CRED_ACQ pid=5126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:46:23.633333 kernel: audit: type=1103 audit(1761889583.618:595): pid=5126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:46:23.646388 kernel: audit: type=1006 audit(1761889583.618:596): pid=5126 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Oct 31 05:46:23.646553 kernel: audit: type=1300 audit(1761889583.618:596): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe48b34f40 a2=3 a3=0 items=0 ppid=1 pid=5126 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" 
exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:46:23.618000 audit[5126]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe48b34f40 a2=3 a3=0 items=0 ppid=1 pid=5126 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 31 05:46:23.640868 systemd[1]: Started session-26.scope. Oct 31 05:46:23.642358 systemd-logind[1296]: New session 26 of user core. Oct 31 05:46:23.618000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 31 05:46:23.656000 audit[5126]: USER_START pid=5126 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:46:23.673182 kernel: audit: type=1327 audit(1761889583.618:596): proctitle=737368643A20636F7265205B707269765D Oct 31 05:46:23.673332 kernel: audit: type=1105 audit(1761889583.656:597): pid=5126 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:46:23.673400 kernel: audit: type=1103 audit(1761889583.659:598): pid=5129 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:46:23.659000 audit[5129]: CRED_ACQ pid=5129 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh 
res=success' Oct 31 05:46:24.613442 sshd[5126]: pam_unix(sshd:session): session closed for user core Oct 31 05:46:24.615000 audit[5126]: USER_END pid=5126 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:46:24.627615 kernel: audit: type=1106 audit(1761889584.615:599): pid=5126 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:46:24.628214 systemd[1]: sshd@25-10.244.21.74:22-139.178.68.195:35908.service: Deactivated successfully. Oct 31 05:46:24.615000 audit[5126]: CRED_DISP pid=5126 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:46:24.630596 systemd[1]: session-26.scope: Deactivated successfully. Oct 31 05:46:24.631353 systemd-logind[1296]: Session 26 logged out. Waiting for processes to exit. Oct 31 05:46:24.632752 systemd-logind[1296]: Removed session 26. Oct 31 05:46:24.637362 kernel: audit: type=1104 audit(1761889584.615:600): pid=5126 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Oct 31 05:46:24.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.244.21.74:22-139.178.68.195:35908 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 31 05:46:25.227724 kubelet[2197]: E1031 05:46:25.227636 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7ff9f49d5d-sjjrx" podUID="4433a427-a60f-4547-95ae-ea306784cb66" Oct 31 05:46:28.227168 kubelet[2197]: E1031 05:46:28.227092 2197 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-xdjq9" podUID="2af0e1f0-2997-4f89-a28e-8b72b5f8fd1c"